Fields: aid (string), mid (string), abstract (string), related_work (string), ref_abstract (dict), title (string), text_except_rw (string), total_words (int64)
cs9809108
2949225035
We present our approach to the problem of how an agent, within an economic Multi-Agent System, can determine when it should behave strategically (i.e. learn and use models of other agents), and when it should act as a simple price-taker. We provide a framework for the incremental implementation of modeling capabilities in agents, and a description of the forms of knowledge required. The agents were implemented and different populations simulated in order to learn more about their behavior and the merits of using and learning agent models. Our results show, among other lessons, how savvy buyers can avoid being "cheated" by sellers, how price volatility can be used to quantitatively predict the benefits of deeper models, and how specific types of agent populations influence system behavior.
Within the MAS community, some work @cite_15 has focused on how artificial AI-based learning agents would fare in communities of similar agents. For example, @cite_6 and @cite_8 show how agents can learn the capabilities of others via repeated interactions, but these agents do not learn to predict what actions others might take. Most of the work in MAS also fails to recognize the possible gains from using explicit agent models to predict agent actions. @cite_9 is an exception and gives another approach for using nested agent models. However, they do not go so far as to try to quantify the advantages of their nested models or show how these could be learned via observations. We believe that our research will bring to the foreground some of the common observations seen in these research areas and help to clarify the implications and utility of learning and using nested agent models.
{ "abstract": [ "In multi-agent environments, an intelligent agent often needs to interact with other individuals or groups of agents to achieve its goals. Agent tracking is one key capability required for intelligent interaction. It involves monitoring the observable actions of other agents and inferring their unobserved actions, plans, goals and behaviors. This article examines the implications of such an agent tracking capability for agent architectures. It specifically focuses on real-time and dynamic environments, where an intelligent agent is faced with the challenge of tracking the highly flexible mix of goal-driven and reactive behaviors of other agents, in real-time. The key implication is that an agent architecture needs to provide direct support for flexible and efficient reasoning about other agents' models. In this article, such support takes the form of an architectural capability to execute the other agent's models, enabling mental simulation of their behaviors. Other architectural requirements that follow include the capabilities for (pseudo-) simultaneous execution of multiple agent models, dynamic sharing and unsharing of multiple agent models and high bandwidth inter-model communication. We have implemented an agent architecture, an experimental variant of the Soar integrated architecture, that conforms to all of these requirements. Agents based on this architecture have been implemented to execute two different tasks in a real-time, dynamic, multi-agent domain. The article presents experimental results illustrating the agents' dynamic behavior.", "I. Introduction, 488. — II. The model with automobiles as an example, 489. — III. Examples and applications, 492. — IV. Counteracting institutions, 499. — V. Conclusion, 500.", "The long-term goal of our field is the creation and understanding of intelligence. Productive research in AI, both practical and theoretical, benefits from a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. This paper outlines a gradual evolution in our formal conception of intelligence that brings it closer to our informal conception and simultaneously reduces the gap between theory and practice.", "" ], "cite_N": [ "@cite_9", "@cite_15", "@cite_6", "@cite_8" ], "mid": [ "1528079221", "2156109180", "1591263692", "" ] }
Learning Nested Agent Models in an Information Economy
In open, multi-agent systems, agents can come and go without any central control or guidance, and thus how and which agents interact with each other will change dynamically. Agents might try to manipulate the interactions to their individual benefits, at the cost of the global efficiency. To avoid this, the protocols and mechanisms that the agents engage in might be constructed to make manipulation irrational (Rosenschein and Zlotkin, 1994), but unfortunately this strategy is only applicable in restricted domains. By situating agents in an economic society, as we do in the University of Michigan Digital Library (UMDL), we can make each agent responsible for making its own decisions about when to buy/sell and who to do business with (Atkins et al., 1996). A market-based infrastructure, built around computational auction agents, serves to discourage agents from engaging in strategic reasoning to manipulate the system by keeping the competitive pressures high. However, since many instances can arise where imperfections in competition could be exploited, agents might benefit from strategic reasoning, either by manipulating the system or not allowing others to manipulate them. But strategic reasoning requires effort. An agent in an information economy like the UMDL must therefore be capable of strategic reasoning and of determining when it is worthwhile to invest in strategic reasoning rather than letting its welfare rest in the hands of the market mechanism. In this paper, we present our approach to the problem of how an agent, within an economic MAS, can determine when it should behave strategically, and when it should act as a simple price-taker. More specifically, we let the agent's strategy consist of learning nested models of the other agents, so the decision it must make refers to which of the models will give it greater gains. We show how, in some circumstances, agents benefit by learning and using models of others, while at other times the extra effort is wasted. Our results point to metrics that can be used to make quantitative predictions as to the benefits obtained from learning and using deeper models. Description of the UMDL The UMDL project is a large-scale, multidisciplinary effort to design and build a flexible, scalable infrastructure for rendering library services in a digital networked environment. In order to meet these goals, we chose to implement the library as a collection of interacting agents, each specialized to perform a particular task. These agents buy and sell goods/services from each other, within an artificial economy, in an effort to make a profit. Since the UMDL is an open system, which will allow third parties to build and integrate their own agents into the architecture, we treat all agents as purely selfish. Implications of the information economy. Information goods/services, like those provided in the UMDL, are very hard to compartmentalize into equivalence classes that all agents can agree on. For example, if a web search engine service is defined as a good, then all agents providing web search services can be considered as selling the same good. It is likely, however, that a buyer of this good might decide that seller s1 provides better answers than seller s2. We cannot possibly hope to enumerate the set of reasons an agent might have for preferring one set of answers (and thus one search agent) over another, and we should not try to do so. 
It should be up to the individual buyers to decide what items belong to the same good category, each buyer clustering items in possibly different ways. This situation is even more evident when we consider an information economy rooted in some information delivery infrastructure (e.g. the Internet). There are two main characteristics that set this economy apart from a traditional economy:
• There is virtually no cost of reproduction. Once the information is created it can be duplicated virtually for free.
• All agents have virtually direct and free access to all other agents.
If these two characteristics are present in an economy, it is useless to talk about supply and demand, since supply is practically infinite for any particular good and available everywhere. The only way agents can survive in such an economy is by providing value-added services that are tailored to meet their customers' needs. Each provider will try to differentiate his goods from everyone else's while each buyer will try to find those suppliers that best meet her value function. We propose to build agents that can achieve these goals by learning models of other agents and making strategic decisions based on these models. These techniques can also be applied, with variable levels of efficacy, to traditional economies.

A Simplified Model of the UMDL

In order to capture the main characteristics of the UMDL, and to facilitate the development and testing of agents, we have defined an "abstract" economic model. We define an economic society of agents as one where each agent is either a buyer or a seller of some particular good. The set of buyers is $B$ and the set of sellers is $S$. These agents exchange goods by paying some price $p \in P$, where $P$ is a finite set. The buyers are capable of assessing the quality of a good received and giving it some value $q \in Q$, where $Q$ is also a finite set. The exchange protocol, seen in Figure 1, works as follows: when a buyer $b \in B$ wants to buy a good $g$, she will advertise this fact. Each seller $s \in S$ that sells that good will give his bid in the form of a price $p^g_s$. The buyer will pick one of these and will pay the seller. All agents will be made aware of this choice along with the prices offered by all the sellers. The winning seller will then return the specified good. Note that there is no law that forces the seller to return a good of any particular quality. For example, an agent that sells web search services returns a set of hits as its good. Each buyer of this good might determine its quality based on the time it took for the response to arrive, the number of hits, the relevance of the hits, or any combination of these and/or other features. Therefore, it would usually be impossible to enforce a quality measure that all buyers can agree with. It is thus up to the buyer to assess the quality $q$ of the good received. Each buyer $b$ also has a value function $V^g_b(p, q)$ for each good $g \in G$ that she might wish to buy. The function returns a number that represents the value that $b$ assigns to that particular good at that particular price and quality. Each seller $s \in S$, on the other hand, has a cost $c^g_s$ associated with each good he can produce. Since we assume that costs and payments are expressed in the same units (i.e. money), then, if seller $s$ gets paid $p$ for good $g$, his profit will be $\text{Profit}(p, c^g_s) = p - c^g_s$. The buyers, therefore, have the goal of maximizing the value they get for their transactions, while the sellers have the goal of maximizing their profits.
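To make the abstract model above concrete, here is a minimal Python sketch of one round of the exchange protocol. All class and function names (Seller, Buyer, run_auction), the random bidding strategy and the assumption that expected quality is known are ours for illustration; learning strategies are introduced later in the paper.

```python
import random

# Illustrative sketch of one iteration of the abstract exchange protocol.
# Names and the trivial bidding strategy are our own; the paper defines the
# model mathematically and does not prescribe an implementation.

class Seller:
    def __init__(self, name, cost, quality):
        self.name, self.cost, self.quality = name, cost, quality

    def bid(self, prices):
        # A seller never bids below cost; here we simply pick a random
        # admissible price (learning-based bidding comes later).
        return random.choice([p for p in prices if p >= self.cost])

class Buyer:
    def __init__(self, name, value_fn):
        self.name, self.value_fn = name, value_fn

    def choose(self, bids, expected_quality):
        # Pick the seller whose (price, expected quality) maximizes V_b(p, q).
        return max(bids, key=lambda s: self.value_fn(bids[s], expected_quality[s]))

def run_auction(buyer, sellers, prices, expected_quality):
    bids = {s: s.bid(prices) for s in sellers}        # step 2: sellers bid
    winner = buyer.choose(bids, expected_quality)     # step 3: buyer picks and pays
    price = bids[winner]
    profit = price - winner.cost                      # Profit(p, c) = p - c
    value = buyer.value_fn(price, winner.quality)     # buyer assesses the returned quality
    return winner.name, price, profit, value

sellers = [Seller("s1", cost=3, quality=3), Seller("s2", cost=8, quality=8)]
buyer = Buyer("b1", value_fn=lambda p, q: 3 * q - p)  # V_b(p, q) = 3q - p (used in the tests)
expected = {s: s.quality for s in sellers}            # assumption: quality known to the buyer
print(run_auction(buyer, sellers, range(20), expected))
```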
These instances are defined in part by the set of other agents present, their capabilities and preferences, and the dynamics of the system. In order to precisely determine what these instances are, and in the hopes of providing a more general framework for studying the effects of increased agent-modeling capabilities within our economic model, we have defined a set of techniques that our agents can use for learning and using models. We divide the agents into classes that correspond to their modeling capabilities. The hierarchy we present is inspired by the Recursive Modeling Method (Gmytrasiewicz, 1996), but is function-based rather than matrix-based, and includes learning. We will first describe our agents at the knowledge level, stating only the type of knowledge the agents are either trying to acquire through learning, or already have (i.e. knowledge that was directly implemented by the designers of the agents), and will then explain the details of how this knowledge was implemented.

At the most abstract level, we can say that every agent $i$ is trying to learn the oracle decision function $\Delta_i : w \to a_i$, which maps the state $w$ of the world into the action $a_i$ that the agent should take in that state. This function will not be fixed throughout the lifetime of the agent because the other agents are also engaged in some kind of learning themselves. The agents that try to directly learn $\Delta_i(w)$ we refer to as 0-level agents, because they have no explicit models of other agents. In fact, they are not aware that there are other agents in the world. Any such agent $i$ will learn a decision function $\delta_i : w \to a_i$, where $w$ is what agent $i$ knows about its external world and $a_i$ is its rational action in that state. For example, a web search agent might look at the going price for web searches in order to determine how much to charge for its service.

Agents with 1-level models of other agents, on the other hand, are aware that there are other agents out there but have no idea what the "interior" of these agents looks like. They have two kinds of knowledge: a set of functions $\delta_{ij} : w \to a_j$, which capture agent $i$'s model of each of the other agents $j$, and $\delta_i : (w, a_{-i}) \to a_i$, which captures $i$'s knowledge of what action to take given $w$ and the collective actions $a_{-i}$ the others will take. We define $a_{-i} = \{a_1, \ldots, a_{i-1}, a_{i+1}, \ldots, a_n\}$, where $n$ is the number of agents. An agent's model of others might not be correct; therefore, it is not always true that $\delta_j(w) = \delta_{ij}(w)$. The $\delta_{ij}(w)$ knowledge for all $j \neq i$ turns out to be easier to learn than the joint action $\delta_i(w, a_{-i})$ because the set of possible hypotheses is smaller.

Agents with 2-level models are assumed to have deeper knowledge about the other agents; that is, they have knowledge of the form $\delta_{ij} : (w, a_{-j}) \to a_j$. This knowledge tells them how others determine which action to take. They also know what actions others think others are going to take, i.e. $\delta_{ijk} : w \to a_k$, and (like 1-level modelers) what action they should take given others' actions, $\delta_i : (w, a_{-i}) \to a_i$. Again, the $\delta_{ijk}(w)$ is easier to learn than the other two, as long as all agents use the same features to discriminate among the different worlds (i.e. share the same $w$).
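The knowledge hierarchy just described can be pictured with a short, schematic Python sketch. The class names and the dictionary-based storage of the $\delta$ functions are our own illustration of which pieces each level learns versus has built in; the paper defines these only as functions.

```python
# Schematic rendering of the nested-knowledge hierarchy (all names are ours).
# Each "delta" is simply a lookup table or callable standing in for the
# corresponding decision function described above.

class ZeroLevelAgent:
    """Learns delta_i(w) -> a_i directly from rewards; models no other agents."""
    def __init__(self):
        self.delta_i = {}                                  # state -> action, learned by RL

    def act(self, w):
        return self.delta_i.get(w)

class OneLevelAgent:
    """Knows delta_i(w, a_-i) a priori; learns delta_ij(w) by observing each agent j."""
    def __init__(self, others, joint_policy):
        self.delta_ij = {j: {} for j in others}            # per-agent model, learned from observation
        self.delta_i = joint_policy                        # built in: best response to others' actions

    def act(self, w):
        predicted = {j: self.delta_ij[j].get(w) for j in self.delta_ij}
        return self.delta_i(w, predicted)

class TwoLevelAgent:
    """Additionally knows delta_ij(w, a_-j) (how j decides) and learns delta_ijk(w)."""
    def __init__(self, others, joint_policy, others_policies):
        self.delta_ijk = {j: {k: {} for k in others if k != j} for j in others}
        self.delta_ij = others_policies                    # built-in intentional models of the others
        self.delta_i = joint_policy

    def act(self, w):
        predicted = {}
        for j in self.delta_ij:
            what_j_expects = {k: self.delta_ijk[j][k].get(w) for k in self.delta_ijk[j]}
            predicted[j] = self.delta_ij[j](w, what_j_expects)
        return self.delta_i(w, predicted)
```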
Populating the knowledge

If the different level agents had to learn all the knowledge then, since the 0-level agents have a lot less knowledge to learn, they would learn it much faster. However, in the economic domain, it is likely that the designer has additional knowledge which could be incorporated into the agents. The agents we built incorporated extra knowledge along these lines. We decided that 0-level agents would learn all their knowledge by tracking their actions and the rewards they got. These agents, therefore, receive no extra domain knowledge from the designers and learn everything from experience. 1-level agents, on the other hand, have a priori knowledge of what action they should take given the actions that others will take. That is, while they try to learn knowledge of the form $\delta_{ij}(w)$ by observing the actions others take (i.e. in a form of supervised learning where the other agents act as tutors), they already have knowledge of the form $\delta_i(w, a_{-i})$. In our economic domain, it is reasonable to assume that agents have this knowledge since, in fact, this type of knowledge can be easily generated. That is, if I know what all the other sellers are going to bid, and the prices that the buyer is willing to pay, then it is easy for me to determine which price to bid.

We must also point out that in this domain, the $\delta_i(w, a_{-i})$ knowledge cannot be used by a 0-level agent. If this knowledge had said, for instance, that from some state $w$ agent $i$ will only ever take one of a few possible actions, then this knowledge could have been used to eliminate impossibilities from the $\delta_i(w)$ knowledge of a 0-level agent. However, this situation never arises in our domain because, as we shall see in the following sections, the states used by the agents permit the set of reasonable actions to always be equal to the set of all possible actions.

The 2-level agents learn their $\delta_{ijk}(w)$ knowledge from observations of others' actions, under the already stated assumption that there is common knowledge of the fact that all agents see the actions taken by all. The rest of the knowledge, i.e. $\delta_{ij}(w, a_{-j})$ and $\delta_i(w, a_{-i})$, is built into the 2-level agents a priori. As with 1-level agents, we cannot use the $\delta_{ij}(w, a_{-j})$ knowledge to add $\delta_{ij}(w)$ knowledge to a 1-level modeler, because other agents are also free to take any one of the possible actions in any state of the world. There are many reasonable ways to explain how the 2-level agents came to possess the $\delta_{ij}(w, a_{-j})$ knowledge. It could be, for instance, that the designer assumed that the other designers would build 1-level agents with the same knowledge we just described. This type of recursive thinking (i.e. "they will do just as I did, so I must do one better"), along with the obvious expansion of the knowledge structure, could be used to generate n-level agents, but so far we have concentrated only on the first three levels. The different forms of knowledge, and their form of acquisition, are summarized in Table 1. In the following sections, we talk about each one of these agents in more detail and give some specifics on their implementation. Our current model emphasizes transactions over a single good, so each agent is only a buyer or a seller, but cannot be both.

Agents with 0-level models

Agents with 0-level models must learn everything they know from observations they make about the environment, and from any rewards they get. In our economic society this means that buyers see the bids they receive and the good received after striking a contract, while sellers see the request for bids and the profit they made (if any). In general, these agents get some input, take an action, then receive some reward.
This framework is the same framework used in reinforcement learning, which is why we decided to use a form of reinforcement learning (Sutton, 1988; Watkins and Dayan, 1992) for implementing learning in our agents. Both buyers and sellers will use the equations in the next few sections for determining what actions to take. But, with a small probability $\epsilon$, they will choose to explore instead of exploit, and will pick their actions at random (except for the fact that sellers never bid below cost). The value of $\epsilon$ is initially 1 but decreases with time to some empirically chosen, fixed minimum value $\epsilon_{\min}$. That is,

$$\epsilon_{t+1} = \begin{cases} \gamma \epsilon_t & \text{if } \gamma \epsilon_t > \epsilon_{\min} \\ \epsilon_{\min} & \text{otherwise} \end{cases}$$

where $0 < \gamma < 1$ is some annealing factor.

Figure 1: View of the protocol. We show only one buyer B and three sellers S1, S2, and S3. At time 1 the buyer requests bids for some good. At time 2 the sellers send their prices for that good. At time 3 the buyer picks one of the bids, pays the seller the amount and then, at time 4, she receives the good.

Table 1: The different forms of knowledge and their method of acquisition.
0-level: $\delta_i(w)$ (learned by reinforcement learning).
1-level: $\delta_i(w, a_{-i})$ (previously known); $\delta_{ij}(w)$ (learned from observation).
2-level: $\delta_i(w, a_{-i})$ (previously known); $\delta_{ij}(w, a_{-j})$ (previously known); $\delta_{ijk}(w)$ (learned from observation).

Buyers with 0-level models. A buyer $b$ will start by requesting bids for a good $g$. She will then receive all bids for good $g$ and will pick the seller

$$s^* = \arg\max_{s \in S} f^g(p^g_s) \quad (1)$$

This function implements the buyer's $\delta_b(w)$ which, in this case, can be rephrased as $\delta_b(p_1 \ldots p_{|S|})$. The function $f^g(p)$ returns the value the buyer expects to get if she buys good $g$ at a price of $p$. It is learned using a simple form of reinforcement learning, namely:

$$f^g_{t+1}(p) = (1 - \alpha) f^g_t(p) + \alpha \cdot V^g_b(p, q) \quad (2)$$

Here $\alpha$ is the learning rate, $p$ is the price $b$ pays for the good, and $q$ is the quality she ascribes to it. The learning rate is initially set to 1 and, like $\epsilon$, is decreased until it reaches some fixed minimum value $\alpha_{\min}$.

Sellers with 0-level models. When asked for a bid, the seller $s$ will provide one whose price is greater than or equal to the cost of producing it (i.e. $p^g_s \geq c^g_s$). From these prices, he will choose the one with the highest expected profit:

$$p^*_s = \arg\max_{p \in P} h^g_s(p) \quad (3)$$

Again, this function encompasses the seller's $\delta_s(g)$ knowledge, where we now have that the states are the goods being sold, $w = g$, and the actions are the prices offered, $a = p$. The function $h^g_s(p)$ returns the profit $s$ expects to get if he offers good $g$ at a price $p$. It is also learned using reinforcement learning, as follows:

$$h^g_{t+1}(p) = (1 - \alpha) h^g_t(p) + \alpha \cdot \text{Profit}^g_s(p) \quad (4)$$

where

$$\text{Profit}^g_s(p) = \begin{cases} p - c^g_s & \text{if his bid is chosen} \\ 0 & \text{otherwise} \end{cases} \quad (5)$$
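As a concrete illustration of equations (1)-(5) and the $\epsilon$-greedy annealing scheme, here is a hedged Python sketch of 0-level buyer and seller learners. The class names and table initialization are ours; the parameter defaults follow the test settings reported later, and only the update rules and the never-bid-below-cost constraint come from the text above.

```python
import random

# Hedged sketch of the 0-level learning rules (equations (1)-(5) above).
# Class names, table sizes and structure are illustrative, not the paper's code.

PRICES = list(range(20))

class ZeroLevelSeller:
    def __init__(self, cost, alpha=1.0, eps=1.0, alpha_min=0.1, eps_min=0.05, gamma=0.99):
        self.cost = cost
        self.h = {p: 0.0 for p in PRICES}                 # h^g(p): expected profit at price p
        self.alpha, self.eps = alpha, eps
        self.alpha_min, self.eps_min, self.gamma = alpha_min, eps_min, gamma

    def bid(self):
        admissible = [p for p in PRICES if p >= self.cost]   # never bid below cost
        if random.random() < self.eps:                       # explore
            return random.choice(admissible)
        return max(admissible, key=lambda p: self.h[p])      # exploit: eq. (3)

    def update(self, p, won):
        profit = (p - self.cost) if won else 0.0             # eq. (5)
        self.h[p] = (1 - self.alpha) * self.h[p] + self.alpha * profit   # eq. (4)
        self.eps = max(self.eps * self.gamma, self.eps_min)  # anneal exploration
        self.alpha = max(self.alpha * self.gamma, self.alpha_min)

class ZeroLevelBuyer:
    def __init__(self, value_fn, alpha=1.0, alpha_min=0.1, gamma=0.99):
        self.f = {p: 0.0 for p in PRICES}                 # f^g(p): expected value of buying at p
        self.value_fn = value_fn
        self.alpha, self.alpha_min, self.gamma = alpha, alpha_min, gamma

    def choose(self, bids):                               # bids: {seller_id: price}
        return max(bids, key=lambda s: self.f[bids[s]])   # eq. (1)

    def update(self, p, q):
        self.f[p] = (1 - self.alpha) * self.f[p] + self.alpha * self.value_fn(p, q)  # eq. (2)
        self.alpha = max(self.alpha * self.gamma, self.alpha_min)
```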
Agents with One-level Models

The next step is for an agent to keep one-level models of the other agents. This means that it has no idea of what the interior (i.e. "mental") processes of the other agents are, but it recognizes the fact that there are other agents out there whose behaviors influence its rewards. The agent, therefore, can only model others by looking at their past behavior and trying to predict, from it, their future actions. The agent also has knowledge, implemented as functions, that tells it what action to take, given a probability distribution over the set of actions that other agents can take. In the actual implementation, as shown below, the $\delta_i(w, a_{-i})$ knowledge takes into account the fact that the $\delta_{ij}(w)$ knowledge is constantly being learned and, therefore, is not correct with perfect certainty.

Buyers with one-level models. A buyer with one-level models can now keep a history of the qualities she ascribes to the goods returned by each seller. She can, in fact, remember the last $N$ qualities returned by some seller $s$ for some good $g$, and define a probability density function $q^g_s(x)$ over the qualities $x$ returned by $s$ (i.e. $q^g_s(x)$ returns the probability that $s$ returns an instance of good $g$ that has quality $x$). This function provides the $\delta_{bs}(g)$ knowledge. She can use the expected value of this function to calculate which seller she expects will give her the highest expected value:

$$s^* = \arg\max_{s \in S} E[V^g_b(p^g_s, q^g_s(x))] = \arg\max_{s \in S} \frac{1}{|Q|} \sum_{x \in Q} q^g_s(x) \cdot V^g_b(p^g_s, x) \quad (6)$$

The $\delta_b(g, q_1 \cdots q_{|S|})$ is given by the previous function, which simply tries to maximize the value the buyer expects to get. The buyer does not need to model other buyers since they do not affect the value she gets.

Sellers with one-level models. Each seller will try to predict what bid the other sellers will submit (based solely on what they have bid in the past), and what bid the buyer will likely pick. A complete implementation would require the seller to remember past combinations of buyers, bids and results (i.e. who was buying, who bid what, and who won). However, it is unrealistic to expect a seller to remember all this since there are at least $|P|^{|S|} \cdot |B|$ possible combinations. However, the seller's one-level behavior can be approximated by having him remember the last $N$ prices accepted by each buyer $b$ for each good $g$, and form a probability density function $m^g_b(x)$, which returns the probability that $b$ will accept (pick) price $x$ for good $g$. The expected value of this function provides the $\delta_{sb}(g)$ knowledge. Similarly, the seller remembers other sellers' last $N$ bids for good $g$ and forms $n^g_s(y)$, which gives the probability that $s$ will bid $y$ for good $g$. The expected value of this function provides the $\delta_s(g)$ knowledge. The seller $s$ can now determine which bid maximizes his expected profits:

$$p^* = \arg\max_{p \in P} (p - c^g_s) \cdot \prod_{s' \in S - \{s\}} \sum_{p' \in P} \begin{cases} n^g_{s'}(p') & \text{if } m^g_b(p') \leq m^g_b(p) \\ 0 & \text{otherwise} \end{cases} \quad (7)$$

Note that this function also does a small amount of approximation by assuming that $s$ wins whenever there is a tie. The function calculates the best bid by determining, for each possible bid, the product of the profit and the probability that the agent will get that profit. Since the profit for lost bids is 0, we only need to consider the cases where $s$ wins. The probability that $s$ will win can then be found by calculating the product of the probabilities that his bid will beat the bids of each of the other sellers. The function approximates the $\delta_s(g, p_b, p_1 \cdots p_{|S|})$ knowledge.
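A hedged sketch of the 1-level seller's bid selection (equation (7)) follows; only the seller side is shown. The frequency-count probability models, the window size N and all class and method names are our own choices for illustration.

```python
from collections import Counter, deque

# Sketch of the 1-level seller's bid choice (equation (7)).  The probability
# models m and n are simple frequency counts over a sliding window of the
# last N observations; names and window size are ours.

N = 50
PRICES = list(range(20))

class OneLevelSeller:
    def __init__(self, cost, rival_ids):
        self.cost = cost
        self.accepted = deque(maxlen=N)                    # last N prices the buyer accepted
        self.rival_bids = {r: deque(maxlen=N) for r in rival_ids}   # last N bids per rival

    def _m(self, p):
        # m^g_b(p): estimated probability the buyer accepts price p.
        return Counter(self.accepted)[p] / max(len(self.accepted), 1)

    def _n(self, rival, p):
        # n^g_s'(p): estimated probability that rival s' bids price p.
        hist = self.rival_bids[rival]
        return Counter(hist)[p] / max(len(hist), 1)

    def bid(self):
        def win_prob(p):
            prob = 1.0
            for rival in self.rival_bids:
                # probability our bid p beats this rival's bid p' (ties go to us)
                prob *= sum(self._n(rival, pp) for pp in PRICES
                            if self._m(pp) <= self._m(p))
            return prob
        candidates = [p for p in PRICES if p >= self.cost]
        return max(candidates, key=lambda p: (p - self.cost) * win_prob(p))
```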
Agents with Two-level Models

The intentional models we use correspond to the functions used by agents that use one-level models. The agents' $\delta_i(w, a_{-i})$ knowledge has again been expanded to take into account the fact that the deeper knowledge is learned and might not be correct. The $\delta_{ijk}(w)$ knowledge is learned from observation, under the assumption that there is common knowledge of the fact that all agents see the bids given by all agents.

Buyers with two-level models. Since the buyer receives bids from the sellers, there is no need for her to try to out-guess or predict what the sellers will bid. She is also not concerned with what the other buyers are doing since, in our model, there is an effectively infinite supply of goods. The buyers are, therefore, not competing with each other and do not need to keep deeper models of others.

Sellers with two-level models. A seller will model other sellers as if they were using the one-level models. That is, he thinks they will model others using policy models and make their decisions using the equations presented in Section 4.3.2. He will try to predict their bids and then try to find a bid for himself that the buyer will prefer over all the bids of the other sellers. His model of the buyer will also be an intentional model. He will model the buyers as though they were implemented as explained in Section 4.3.1. A seller, therefore, assumes that he has the correct intentional models of the other agents. The algorithm he follows is to first use his models of the sellers to predict what bids $p_i$ they will submit. He has a model of the buyer, $C(s_1 \cdots s_n, p_1 \cdots p_n) \to s_i$, that tells him which seller she might choose given the set of bids $p_i$ submitted by each seller $s_i$. The seller $s_j$ uses this model to determine which of his bids will bring him the highest profit, by first finding the set of bids he can make that will win:

$$P' = \{p_j \mid p_j \in P,\ s_j = C(s_1 \cdots s_j \cdots s_n, p_1 \cdots p_j \cdots p_n)\}$$

and from these finding the one with the highest profit:

$$p^* = \arg\max_{p \in P'} (p - c^g_s) \quad (9)$$

These functions provide the $\delta_s(g, p_b, p_1 \cdots p_{|S|})$ knowledge.
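The 2-level seller's procedure (predict the other sellers' bids with intentional models, keep the bids the modeled buyer would pick, and take the most profitable one, as in equation (9)) can be sketched as follows. The callables rival_models and buyer_choice are placeholders for the seller's built-in intentional models, and the fallback when no bid wins is our own assumption.

```python
# Engine-of-reasoning sketch of the 2-level seller's decision procedure.
# All names are ours; the intentional models are passed in as callables.

PRICES = list(range(20))

def two_level_bid(my_id, my_cost, rival_models, buyer_choice):
    """
    rival_models: {seller_id: callable() -> predicted price}   (models of 1-level rivals)
    buyer_choice: callable(bids: dict) -> seller_id             (model C of the buyer)
    """
    predicted = {sid: model() for sid, model in rival_models.items()}

    # P' = set of own bids that the modeled buyer would pick over the predicted rival bids.
    winning = []
    for p in PRICES:
        if p < my_cost:
            continue                        # never bid below cost
        bids = dict(predicted)
        bids[my_id] = p
        if buyer_choice(bids) == my_id:
            winning.append(p)

    if not winning:                         # assumption: fall back to bidding at cost
        return my_cost
    # eq. (9): profit p - c grows with p, so the highest winning bid maximizes profit.
    return max(winning)
```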
Tests

Since there is no obvious way to analytically determine how different populations of agents would interact and, of greater interest to us, how much better (or worse) the agents with deeper models would fare, we decided to implement a society of the agents described above and run it to test our hypotheses. In all tests, we had 5 buyers and 8 sellers. The buyers had the same value function $V_b(p, q) = 3q - p$, which means that if $p = q$ then the buyers will prefer the seller that offers the higher quality. The quality that they perceived was the same only on average, i.e. any particular good might be thought to have quality that is slightly higher or lower than expected. All sellers had costs equal to the quality they returned, in order to support the common-sense assumption that quality goods cost more to produce. A set of these buyers and sellers is what we call a population. We tried various populations; within each population we kept constant the agents' modeling levels, the value assessment functions and the qualities returned. The tests involved a series of such populations, each one with agents of different modeling levels, and/or sellers with different quality/costs. We also set $\alpha_{\min} = 0.1$, $\epsilon_{\min} = 0.05$, and $\gamma = 0.99$. There were 100 runs done for each population of agents, each run consisting of 10000 auctions (i.e. iterations of the protocol). The lessons presented in the next section are based on the averages of these 100 runs.

Lessons

From our tests we were able to discern several lessons about the dynamics of different populations of agents. Some of these lead to methods that can be used to make quantitative predictions about agents' performance, while others make qualitative assessments about the type of behaviors we might expect. We detail these in the next subsections, and summarize them in Table 2.

Micro versus macro behaviors.

In all tests, we found that the behavior for any particular run does not necessarily reflect the average behavior of the system. The prices have a tendency to sometimes reach temporary stable points. These conjectural equilibria, as described in (Hu and Wellman, 1996), are instances when all of the agents' models are correctly predicting the others' behavior and, therefore, the agents do not need to change their models or their actions. These conjectural equilibrium points are seldom global optima for the agents. If one of our agents finds itself at one of these equilibrium points, since the agent is always exploring with probability $\epsilon$, it will in time discover that this point is only a local optimum (i.e. it can get more profit selling/buying at a different price) and will change its actions accordingly. Only when the price is an equilibrium price do we find that the agents continue to take the same actions forever, leaving the price at its equilibrium point. In order to understand the more significant macro-level behaviors of the system, we present results that are based on the averages from many runs. While these averages seem very stable, and a good first step in learning to understand these systems, in the future we will need to address some of the micro-level issues. We do notice from our data that the micro-level behaviors (e.g. temporary conjectural equilibria, price fluctuations) are much more closely tied, usually in intuitive ways, to the agents' learning rate $\alpha$ and exploration rate $\epsilon$. That is, higher rates for both of these lead to more price fluctuations and shorter temporary equilibria.

0-level buyers and sellers.

This type of population is equivalent to a "blind" auction, where the agents only see the price and the good, but are prevented from seeing who the seller (or buyer) was. As expected, we found that an equilibrium is reached as long as all the sellers are providing the same quality. This is the case for population 1 in Figure 2. Otherwise, if the sellers offer different quality goods, the price fluctuates as the buyers try to find the price that on the whole returns the best quality, and the sellers try to find the price the buyers favor. In these populations, the sellers offering the higher quality, at a higher cost, lose money. Meanwhile, sellers offering lower quality, at a lower cost, earn some extra income by selling their low quality goods to buyers that expect, and are paying for, higher quality. As more sellers start to offer lower quality, we find that the mean price actually increases, evidently because price acts as a signal for quality and the added uncertainty makes the higher prices more likely to give the buyer a higher value. We see this in Figure 2, where population 1 has all sellers returning the same quality while in each successive population more agents offer lower quality. The price distribution for population 1 is concentrated on 9, but for populations 2 through 6 it flattens and shifts to the right, increasing the mean price. It is only by population 7 that it starts to shift back to the left, thus reducing the mean price, as seen in Figure 3.

Figure 2: The prices are $0 \cdots 19$. The columns represent the percentage of time the good was sold at each price, in each population. In p1 sellers return qualities {8, 8, 8, 8, 8, 8, 8, 8}, in p2 it is {8, 8, 8, 8, 8, 8, 7, 8}, and so on, such that by p8 it is {1, 2, 3, 4, 5, 6, 7, 8}. The highest peak in all populations corresponds to price 9.
That is, it is only after a significant number of sellers start to offer lower quality that we see the mean price decrease.

6.3 0-level buyers and sellers, plus one 1-level seller.

In these population sets we explored the advantages that a 1-level seller has over identical 0-level sellers. The advantage was non-existent when all sellers returned the same quality (i.e. when the prices reached an equilibrium, as shown in population 1 in Figure 4), but increased as the sellers started to diverge in the quality they returned. In order to make these findings useful when building agents, we needed a way to make quantitative predictions as to the benefits of keeping 1-level models. It turns out that these benefits can be predicted, not by the population type as we had first guessed, but by the price volatility. We define volatility as the number of times the price changes from one auction to the next, divided by the total number of auctions. Figure 5 shows the linear relation between volatility and the percentage of times the 1-level seller wins. The two lines correspond to two "types" of volatility. The first line includes populations 1 through 5 (p1-p5). It reflects the case where the buyers' second-favorite (and possibly the third, fourth, etc.) equilibrium price is greater than their most preferred price. In these cases the buyers and sellers fight over the two most preferred prices, the sellers pulling towards the higher equilibrium price and the buyers towards the lower one, as shown by the two peaks in populations 4 and 5 in Figure 4. The other line, which includes populations 6 and 7, corresponds to cases where the buyers' preferred equilibrium price is greater than the runners-up. In these cases there is no contest between two equilibria. We observe only one peak in the price distribution for these populations. The slope of these lines can be easily calculated and the resulting function can be used by a seller agent for making a quantitative prediction as to how much he would benefit by switching to 1-level models. That is, he could measure price volatility, multiply it by the appropriate slope, and the resulting number would be the percentage of times he would win. However, for this to work the agent needs to know that all of the buyers and all of the other sellers are 0-level modelers, because different types of populations lead to different slopes. Also, slight changes in our learning parameters ($0.02 \leq \epsilon_{\min} \leq 0.08$ and $0.05 \leq \alpha_{\min} \leq 0.2$) lead to slight changes in the slopes, so these would have to be taken into account if the agent is actively changing its parameters. We also want to make clear a small caveat, which is that the volatility that is correlated to the usefulness of keeping 1-level models is the volatility of the system with the agent already doing 1-level modeling. Fortunately, our experiments show that having one agent change from 0-level to 1-level does not have a great effect on the volatility as long as there are enough (i.e. more than five or so) other sellers.
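The volatility measure defined above, and the linear prediction it supports, are simple to compute; here is a small sketch. The slope value in the example is a hypothetical placeholder, since the actual slopes are fit from the simulations for each population type.

```python
# Sketch of the price-volatility measure and the linear win-rate prediction.
# The example slope is hypothetical; it would be estimated from simulation data.

def price_volatility(prices):
    """Number of auction-to-auction price changes divided by the total number of auctions."""
    changes = sum(1 for a, b in zip(prices, prices[1:]) if a != b)
    return changes / max(len(prices), 1)

def predicted_win_rate(volatility, slope):
    """Predicted fraction of auctions a 1-level seller would win, assuming the
    linear volatility/win-rate relation described above (Figure 5)."""
    return slope * volatility

clearing_prices = [9, 9, 10, 9, 8, 9, 9, 11, 9, 10]      # illustrative price series
v = price_volatility(clearing_prices)
print(v, predicted_win_rate(v, slope=0.8))               # slope is a placeholder
```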
The reason volatility is such a good predictor is that it serves as an accurate assessment of how dynamic the system is and, in turn, of the complexity of the learning problem faced by the agents. It turns out that the learning problem faced by 1-level agents is "simpler" than the one faced by 0-level modelers. Our 0-level agents use reinforcement learning to learn a good match between world states and the actions they should take. The 1-level agents, on the other hand, can see the actions other agents take and do not need to learn their models through indirect reinforcements. They instead use a form of supervised learning to learn the models of others. Since 1-level agents need fewer interactions to learn a correct model, their models will, in general, be better than those of 0-level agents in direct proportion to the speed with which the target function changes. That is, in a slow-changing world both of them will have time enough to arrive at approximately correct models, while in a fast-changing world only the 1-level agents will have time to arrive at an approximately correct model. This explains why high price volatility is correlated to an increase in the 1-level agent's performance. However, as we saw, the relative advantages for different volatilities (i.e. the slope in Figure 5) will also depend on the shape of the price distribution and the particular population of agents.

Finally, in all populations where the buyers are 0-level, we saw that it really pays for the sellers to have low costs because this allows them to lower their prices to fit almost any demand. Since the buyers have 0-level models, the sellers with low quality and cost can raise their prices when appropriate, in effect "pretending" to be the high-quality sellers, and make an even more substantial profit. This extra profit comes at the cost of a reduction in the average value that the buyers receive. In other words, the buyers get less value because they are only 0-level agents and are less able to detect the sellers' deception. In the next section we will see how this is not true for 1-level buyers. Of course, the 1-level sellers were more successful at this deception strategy than the 0-level sellers. Figure 6 shows the profit of several agents in a population as a function of their cost. We can see how the 0-level agents' profit decreases with increasing costs, and how the 1-level agent's profit is much higher than that of a 0-level agent with the same costs. We also notice that, since the 0-level agents are not as successful as the 1-level at taking advantage of their low costs, the first 0-level seller (that returns quality 2) has lower profit than the rest, as some of his profit was taken away by the 1-level seller (that returns the same quality).

1-level buyers and 0 and 1-level sellers.

In these populations the buyers have the upper hand. They quickly identify those sellers that provide the highest quality goods and buy exclusively from them. The sellers do not benefit from having deeper models; in fact, Figure 7 shows how the 1-level seller's profit is less than that of a similar 0-level seller, because the 1-level seller tries to charge higher prices than the 0-level seller. The 1-level buyers do not fall for this trick: they know what quality to expect, and buy more from the lower-priced 0-level seller(s). We have here a case of erroneous models: 1-level sellers assume that buyers are 0-level, and since this is not true, their erroneous deductions lead them to make bad decisions. To stay a step ahead, sellers would need to be 2-level in this case. In Figure 7, the first population has all sellers returning a quality of 8, while by population 7 they are returning qualities of {8, 2, 3, 4, 5, 6, 7, 8}, respectively, with the 1-level seller always returning a quality of 8. We notice that the difference in profits between the 0-level and the 1-level increases with successive populations.
This is explained by the fact that in the first population all seven 0-level sellers are returning the same quality, while by population 7 only the 0-level seller pictured (i.e. the first one) is still returning quality 8. This means that his competition, in the form of other 0-level sellers returning the same quality, decreases for successive populations. Meanwhile, in all populations there is only one 1-level seller, who has no competition from other 1-level sellers. To summarize, the 0-level seller's profit is always higher than the similar 1-level seller's, and the difference increases as there are fewer other competing 0-level sellers who offer the same quality.

1-level buyers and several 1-level sellers.

We have shown how 1-level sellers do better, on average, than 0-level sellers when faced with 0-level buyers, but this is not true anymore if too many 0-level sellers decide to become 1-level. Figure 8 shows how the profits of a 1-level seller decrease as he is joined by other 1-level sellers. In this figure the sellers are returning qualities of {2, 2, 2, 2, 2, 3, 4}. Initially they are all 0-level; then one of the sellers with quality 2 becomes 1-level (he is the seller shown in the figure), then another one, and so on, until there is only one 0-level seller with quality 2. Then the seller with quality 3 becomes 1-level and, finally, the seller with quality 4 becomes 1-level. At this point we have six 1-level sellers and one 0-level seller. We can see that with more than four 1-level sellers the 0-level seller is actually making more profit than the similar 1-level seller. The 1-level seller's profit decreases because, as more sellers change from 0-level to 1-level, they are competing directly with him, since they are offering the same quality and are the same level. Notice that the 1-level seller's curve flattens after four 1-level sellers are present in the population. The reason is that the next sellers to change over to 1-level return qualities of 3 and 4, respectively, so they do not compete directly with the seller pictured. His profits, therefore, do not keep decreasing. For this test, and other similar tests, we had to use a population of sellers that produce different qualities because, as explained in Section 6.3, if they had returned the same quality then an equilibrium would have been reached, which would prevent the 1-level sellers from making a significantly greater profit than the 0-level sellers.

6.6 1-level buyers and 1 and 2-level sellers.

Assuming that the 2-level seller has perfect models of the other agents, we find that he wins an overwhelming percentage of the time. This is true, surprisingly enough, even when some of the 1-level sellers offer slightly higher quality goods. However, when the quality difference becomes too great (i.e. greater than 1), the buyers finally start to buy from the high-quality 1-level sellers. This case is very similar to the ones with 0-level buyers and 0 and 1-level sellers, and we can start to discern a recurring pattern. In this case, however, it is much more computationally expensive to maintain 2-level models. On the other hand, since these 2-level models are perfect, they are better predictors than the 1-level models, which explains why the 2-level seller wins much more than the 1-level seller from Section 6.3.
Table 2: Summary of lessons. In all cases the buyers had identical value and quality assessment functions. Sellers were constrained to always return the same quality.
• Buyers 0-level, sellers 0-level: Equilibrium reached only when all sellers offer the same quality. Otherwise, we get oscillations. Mean price increases when the quality offered decreases.
• Buyers 0-level, sellers of any level: Sellers have big incentives to lower quality/cost.
• Buyers 0-level, sellers 0-level plus one 1-level: The 1-level seller beats the others. The quantitative advantage of being 1-level is predicted by volatility and price distribution.
• Buyers 0-level, sellers 0-level plus many 1-level: 1-level sellers do better, as long as there are not too many of them.
• Buyers 1-level, sellers 0-level plus one 1-level: Buyers have the upper hand. They buy from the most preferred seller. 1-level sellers are usually at a disadvantage.
• Buyers 1-level, sellers 1-level plus one 2-level: Since the 2-level seller has perfect models, it wins an overwhelming percentage of the time, except when it offers a rather lower quality.

Conclusions

We have presented a framework for the development of agents with incremental modeling/learning capabilities in an economic society of agents. These agents were built, and the execution of different agent populations led us to the lessons summarized in Table 2. The discovery of volatility and price distributions as predictors of the benefits of deeper models will be very useful as a guide for deciding how much modeling capability to build into an agent. This decision could either be made prior to development or, given enough information, it could be made at runtime. We are also encouraged by the fact that increasing the agents' capabilities changes the system in ways that we can recognize from our everyday economic experience. Some of the agent structures shown in this paper are already being implemented in the UMDL (Atkins et al., 1996). We have a basic economic infrastructure that allows agents to engage in commerce, and the agents use customizable heuristics for determining their strategic behavior. We are working on incorporating the more advanced modeling capabilities into our agents in order to enable more interesting strategic behaviors. Our results showed how sellers with deeper models fare better, in general, even when they produce less valuable goods. This means that we should expect those types of agents to, eventually, be added into the UMDL. Fortunately, this advantage is diminished by having buyers keep deeper models. We expect that there will be a level at which the gains and costs associated with keeping deeper models balance out for each agent. Our hope is to provide a mechanism for agents to dynamically determine this cutoff and constantly adjust their behavior to maximize their expected profits given the current system behavior. The lessons in this paper are a significant step in this direction. We have seen that one needs to look at price volatility and at the modeling levels of the other agents to determine what modeling level will give the highest profits. We have also learned how buyers and sellers of different levels, offering different qualities, lead to different system dynamics which, in turn, dictate whether the learning of nested models is useful or not. We are considering the expansion of the model with the possible additions of agents that can both buy and sell, and sellers that can return different quality goods. Allowing sellers to change the quality returned to fit the buyer will make them more competitive against 1-level buyers. We are also continuing tests on many different types of agent populations in the hopes of getting a better understanding of how well different agents fare in the different populations.
In the long run, another offshoot of this research could be a better characterization of the types of environments and how they allow/inhibit "cheating" behavior in different agent populations. That is, we saw how, in our economic model, agents are sometimes rewarded for behavior that does not seem to be good for the community as a whole (e.g. when some of the sellers raised their price while lowering the quality they offered). The rewards, we are finding, start to diminish as the other agents become "smarter". We can intuit that the agents in these systems will eventually settle on some level of nesting that balances their costs of keeping nested models with their gains from taking better actions (Kauffman, 1994). It would be very useful to characterize the environments, agent populations, and types of "equilibria" that these might lead to, especially as interest in multi-agent systems grows.
7,648
1903.05238
2963943458
Abstract. Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environments. Resulting grasps are visually realistic because the hand is automatically fitted to the object shape from a position and orientation determined by the user using the VR handheld controllers (e.g. Oculus Touch motion controllers). Our approach is flexible because it can be adapted to different hand meshes (e.g. human or robotic hands) and it is also easily customizable. Moreover, it enables interaction with different objects regardless of their geometry. In order to validate our proposal, an exhaustive qualitative and quantitative performance analysis has been carried out. On the one hand, qualitative evaluation was used in the assessment of abstract aspects, such as motor control, finger movement realism, and interaction realism. On the other hand, for the quantitative evaluation a novel metric has been proposed to visually analyze the performed grips. Performance analysis results indicate that previous experience with our grasping system is not a prerequisite for an enjoyable, natural and intuitive VR interaction experience.
The grasping action is the most basic component of any interaction, and it is composed of three major components @cite_21. The first is the process of moving the arm and hand toward the target object, taking the overall body movement into account. The second component focuses on the hand and body pre-shaping before the grasping action. Finally, the last component fits the hand to the geometry of the object by closing each of the fingers until contact is established.
{ "abstract": [ "Abstract This paper addresses the important issue of automating grasping movement in the animation of virtual actors, and presents a methodology and algorithm to generate realistic looking grasping motion of arbitrary shaped objects. A hybrid approach using both forward and inverse kinematics is proposed. A database of predefined body postures and hand trajectories are generalized to adapt to a specific grasp. The reachable space is divided into small subvolumes, which enables the construction of the database. The paper also addresses some common problems of articulated figure animation. A new approach for body positioning with kinematic constraints on both hands is described. An efficient and accurate manipulation of joint constraints is also presented. Finally, we describe an interpolation algorithm which interpolates between two postures of an articulated figure by moving the end effector along a specific trajectory and maintaining all the joint angles in the feasible range. Results are quite satisfactory, and some are shown in the paper." ], "cite_N": [ "@cite_21" ], "mid": [ "1999329153" ] }
A Visually Plausible Grasping System for Object Manipulation and Interaction in Virtual Reality Environments
With the advent of affordable VR headsets such as Oculus VR/Go and HTC Vive, many works and projects are using virtual environments for different purposes. Most VR applications are related to the entertainment industry (i.e. games and 3D cinema) or architectural visualizations, where virtual scene realism is a cornerstone. Currently existing VR systems are limited by their resolution, field of view, frame rate, and interaction, among other technical specifications. In order to enhance the user's VR experience, developers are also focused on implementing rich interactions with the virtual environment, allowing the user to explore, interact with and manipulate scene objects as in the real world. Interaction is a crucial feature for training/simulation applications (e.g. flight, driving and medical simulators), and also teleoperation (e.g. robotics), where the user's ability to interact with and explore the simulated environments is paramount for achieving an immersive experience. For this purpose, most VR devices come with a pair of handheld controllers which are fully tracked in 3D space and specifically designed for interaction. One of the most basic interaction tasks is object grasping and manipulation. In order to achieve an enjoyable experience in VR, a realistic, flexible and real-time grasping system is needed. However, grasp synthesis in manipulation tasks is not straightforward because of the unlimited number of different hand configurations, the variety of object types and their geometries, and also the selection of the most suitable grasp for every different object in terms of realism, kinematics and physics. Currently existing real-time approaches in VR are purely animation-driven, completely relying on the realism of the animations. Moreover, these approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. For every different object type and geometry, predefined animations are needed. This fact hinders the user experience, limiting their interaction capabilities. For complete immersion, the user should be able to interact with and manipulate different virtual objects as in the real world.

In this paper, we propose a real-time grasping system for object interaction in virtual reality environments. We aim to achieve natural and visually plausible interactions in photorealistic environments rendered by Unreal Engine. Taking advantage of headset tracking and motion controllers, a human operator can be embodied in such environments as a virtual human or robot agent to freely navigate and interact with objects. Our grasping system is able to deal with different object geometries, without the need for a predefined grasp animation for each one. With our approach, fingers are automatically fitted to the object shape and geometry. We constrain the motion of the finger phalanges, checking in real time for collisions with the object geometry. Our grasping system was analyzed both qualitatively and quantitatively. On the one hand, for the qualitative analysis, the grasping system was implemented in a photorealistic environment where the user is freely able to interact with real-world objects extracted from the YCB dataset [1] (see Figure 1). The qualitative evaluation is based on a questionnaire that addresses the user's interaction experience in terms of realism during object manipulation and interaction, system flexibility and usability, and general VR experience.
On the other hand, a quantitative analysis of the grasping system was carried out, contrasting the elapsed time a user needs to grasp an object with grasp quality, based on a novel error metric which quantifies the overlap between the hand fingers and the grasped object. From the quantitative evaluation, we obtain individual errors for the last two phalanges of each finger, the time the user needed to grasp the object, and also the contact points. This information, alongside other data provided by UnrealROX [2] such as depth maps, instance segmentation, normal maps, 3D bounding boxes and 6D object poses (see Figure 8), enables different robotic applications, as described in Section 6.

In summary, we make the following three contributions:
• We propose a real-time, realistic-looking and flexible grasping system for natural interaction with arbitrarily shaped objects in virtual reality environments;
• We propose a novel metric and procedure to analyze visual grasp quality in VR interactions by quantifying hand-object overlap;
• We provide the contact points extracted during the interaction in both local and global system coordinates.

The rest of the paper is structured as follows. First of all, Section 2 analyzes the latest works related to object interaction and manipulation in virtual environments. The core of this work is presented in Section 3, where our approach is described in detail. Then, the performance analysis, with the qualitative and our novel quantitative evaluations, is discussed in Section 4. Analysis results are reported in Section 5. Then, several applications are discussed in Section 6. After that, limitations of our approach are covered in Section 7, alongside future work. Finally, some conclusions are drawn in the last Section 8.

Data-driven approaches

Data-driven grasping approaches have existed for a long time [3]. These methods are based on large databases of predefined hand poses selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped), which provide the ability to discriminate between different grasp types. From this database, grasp poses are selected according to the given object shape and geometry [6] [7]. Li et al. [6] construct a database with different hand poses and also object shapes and sizes. Despite having a good database, the process of hand pose selection is not straightforward, since there can be multiple equally valid possibilities for the same gesture. To address this problem, Li et al. [6] proposed a shape-matching algorithm which returns multiple potential grasp poses. The selection process is also constrained by the hand's high number of degrees of freedom (DOF). In order to deal with dimensionality and redundancy, many researchers have used techniques such as principal component analysis (PCA) [8] [9]. For the same purpose, Jorg et al. [10] studied the correlations between hand DOFs, aiming to simplify hand models by reducing the number of DOFs. The results suggest simplifying hand models by reducing the DOFs from 50 to 15 for both hands in conjunction, without losing relevant features.

Hybrid data-driven approaches

In order to achieve realistic object interactions, physical simulations on the objects should also be considered [11] [12] [13]. Moreover, hand and finger movement trajectories need to be both kinematically and dynamically valid [14]. Pollard et al. [11] simulate hand interaction, such as two hands grasping each other in the handshake gesture.
Bai et al. [13] simulate grasping an object, dropping it on a specific spot on the palm, and letting it roll on the palm. A limitation of this approach is that information about the object must be known in advance, which prevents interaction with unknown objects. Using an initial grasp pose and a desired object trajectory, the algorithm proposed by Liu [15] can generate physically based hand manipulation poses by varying the contact points with the object, the grasping forces and also the joint configurations. This approach works well for complex manipulations such as twist-opening a bottle. Ye and Liu [14] reconstruct realistic hand motion and grasping by generating feasible contact point trajectories. The selection of valid motions is defined as a randomized depth-first tree traversal, where nodes are recursively expanded if they are kinematically and dynamically feasible. Otherwise, backtracking is performed in order to explore other possibilities.

Virtual reality approaches
This section is limited to virtual reality interaction using VR motion controllers, leaving aside glove-based and bare-hand approaches. Implementing the aforementioned techniques in virtual reality environments is a difficult task because optimizations are needed to keep the processes running in real time. Most currently existing approaches for flexible and realistic grasping are not suitable for real-time interaction. VR developers aim to create fast solutions with realistic and natural interactions. Recent approaches are directly related to the entertainment industry, i.e. video games. An excellent example is Lone Echo, a narrative adventure game which consists of manipulating tools and objects to solve puzzles. Hand animations are mostly procedurally generated, enabling the grasping of complex geometries regardless of the grasp angle. This approach [16] is based on a graph traversal heuristic which searches for intersections between the hand fingers and the object surface mesh triangles. An A* heuristic finds the intersection that is nearest to the palm and also avoids invalid intersections. After calculating the angles needed to make contact with each intersection point, the highest angle is selected and the fingers are rotated accordingly. Most solutions implemented in VR are animation-based [17] [18] [19]. These approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. Movements are predefined for concrete object geometries, hindering the user's interaction capabilities in the virtual environment. In [17], a distance-grab selection technique is implemented to enhance user comfort when interacting in small play areas, while sitting, or when grabbing objects from the floor. The grasping system is based on three trigger volumes attached to each hand: two small cylinders for short-range grasps, and a cone for long-range grabbing. Based on this approach, we use trigger volumes attached to the finger phalanges to control their movement and detect object collisions more precisely. In this way, we achieve a more flexible and visually plausible grasping system, enhancing immersion and realism during interactions.

GRASPING SYSTEM
With the latest advances in rendering techniques, the visualization of virtual reality (VR) environments is increasingly photorealistic. Besides graphics, which are the cornerstone of most VR solutions, interaction is also an essential part of enhancing the user experience and immersion.
VR scene content is portrayed in a physically tangible way, inviting users to explore the environment and to interact with or manipulate the represented objects as in the real world. VR devices aim to provide very congruent means of primary interaction, in the form of a pair of handheld devices with very accurate 6D one-to-one tracking. The main purpose is to create rich interactions producing memorable and satisfying VR experiences. Most of the currently available VR solutions and games lack robust and natural object manipulation and interaction capabilities. This is because bringing natural and intuitive interactions to VR is not straightforward, which makes VR development challenging at this stage. Interactions need to run in real time while maintaining a high and stable frame rate, directly mapping user movement to VR input in order to avoid VR sickness (visual and vestibular mismatch). Maintaining the desired 90 frames per second (FPS) in a photorealistic scene alongside complex interactions is not straightforward. This indicates the need for a flexible grasping system designed to naturally and intuitively manipulate unknown objects of different geometries in real time.

Overview
Our grasping approach was designed for real-time interaction and manipulation in virtual reality environments by providing a simple, modular, flexible, robust, and visually realistic grasping system. Its main features are described as follows:
• Simple and modular: it can be easily integrated with other hand configurations. Its design is modular and adaptable to different hand skeletons and models.
• Flexible: most of the currently available VR grasp solutions are purely animation-driven, thus limited to known geometries and unable to deal with previously unseen objects. In contrast, our grasping system is flexible, as it allows interaction with unknown objects. In this way, the user can freely decide which object to interact with, without any restrictions.
• Robust: unknown objects can have different geometries. However, our approach is able to adapt the virtual hand to objects regardless of their shape.
• Visually realistic: the grasping action is fully controlled by the user, taking advantage of their previous experience and knowledge in grasping common real-world objects such as cans, cereal boxes, fruits, tools, etc. This makes the resulting grasps visually realistic and natural, just as a human would perform them in real life.
The combination of the features described above makes VR interaction a pleasant user experience, where object manipulation is smooth and intuitive. Our grasping system works by detecting collisions with objects through the use of trigger actors placed experimentally on the finger phalanges. A trigger actor is a component from Unreal Engine 4 used for casting an event in response to an interaction, e.g. a collision with another object. These components can have different shapes, such as capsule, box, sphere, etc. In Figure 2, the capsule triggers are represented in green and the palm sphere trigger in red. We experimentally placed two capsule triggers on the last two phalanges of each finger. We noticed that this configuration is the most effective for detecting object collisions. Notice that collision detection is performed every frame, so for heavy configurations with many triggers, performance would suffer.

Components
Our grasping system is composed of the components represented in Figure 3. These components are defined as follows:
• Object selection: selects the nearest object to the hand palm.
The detection area is determined by the palm sphere trigger (represented in red in Figure 2). The sphere trigger returns the world location of all the overlapped actors. As a result, the nearest actor can be determined by computing the distance from each overlapped actor to the center of the sphere trigger. The smallest distance determines the nearest object, whose reference is saved for the other components.
• Interaction manager: manages the capsule triggers which are attached to the finger phalanges, as represented in Figure 2. If a capsule trigger reports an overlap event, the movement of its corresponding phalanx is blocked until the hand is reopened or the overlap with the manipulated object ends. The phalanx state (blocked or in movement) is used as input to the grasping logic component. A phalanx is blocked if its corresponding capsule trigger overlaps the manipulated object.
• Finger movement: this component determines the movement of the fingers during the hand closing and opening animations. It ensures a smooth animation, avoiding unexpected and unrealistic finger movement caused by a performance drop or other interaction issues. Basically, it monitors each variation in the rotation value of the phalanx. If an unexpected (i.e. large) variation is detected between frames, the missing intermediate values are interpolated so as to keep the finger movement smooth.
• Grasping logic: this component manages when to grab or release an object. This decision is made based on the currently blocked phalanges determined by the interaction manager component. The object is grasped or released based on the following function:

$$f(x) = \begin{cases} \text{true}, & \text{if } (th_{ph} \lor palm) \land (in_{ph} \lor mi_{ph}) \\ \text{false}, & \text{otherwise} \end{cases} \qquad (1)$$

where $x = (th_{ph}, in_{ph}, mi_{ph}, palm)$ is defined as

$$th_{ph} = thumb_{mid} \lor thumb_{dist}, \qquad in_{ph} = index_{mid} \lor index_{dist}, \qquad mi_{ph} = middle_{mid} \lor middle_{dist} \qquad (2)$$

Equation 1 determines when an object is grasped or released based on the inputs defined in Equation 2, where $th_{ph}$, $in_{ph}$ and $mi_{ph}$ are the thumb, index and middle phalanges, respectively. Following human hand morphology, the subscripts mid and dist refer to the middle and distal phalanges (e.g. $thumb_{dist}$ references the distal phalanx of the thumb; at the implementation level it is a boolean value).

Implementation details
The grasping system was originally implemented in Unreal Engine 4 (UE4); however, it can be easily reimplemented in other engines such as Unity, which also provide the necessary components for replicating the system (e.g. overlap triggers). The implementation consists of UE4 blueprints and is structured into the components depicted in Figure 3 and described in the previous section. The implementation is available on GitHub (https://github.com/3dperceptionlab/unrealgrasp).
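To make the object selection, finger movement and grasping logic components more concrete, the following minimal Python sketch reproduces their decision rules outside the engine. It is an illustration only, not the actual UE4 blueprint implementation: the `overlapping` and `blocked` dictionaries, the function names and the fixed rotation step are assumptions introduced for this example.

```python
import math

def nearest_object(palm_center, overlapping):
    """Object selection: return the overlapped actor closest to the palm sphere trigger.
    `overlapping` maps an actor id to its world location (x, y, z)."""
    return min(overlapping,
               key=lambda actor: math.dist(palm_center, overlapping[actor]),
               default=None)

def smooth_rotation(previous_deg, target_deg, max_step_deg=10.0):
    """Finger movement: limit the per-frame change of a phalanx rotation so that
    large jumps (e.g. after a frame drop) are interpolated over several frames."""
    step = max(-max_step_deg, min(max_step_deg, target_deg - previous_deg))
    return previous_deg + step

def should_grasp(blocked):
    """Grasping logic (Equations 1 and 2): `blocked` holds a boolean per phalanx
    capsule trigger (and for the palm sphere trigger) indicating an overlap
    with the selected object."""
    th_ph = blocked["thumb_mid"] or blocked["thumb_dist"]
    in_ph = blocked["index_mid"] or blocked["index_dist"]
    mi_ph = blocked["middle_mid"] or blocked["middle_dist"]
    return (th_ph or blocked["palm"]) and (in_ph or mi_ph)

# Example: thumb and index report contact with the selected object -> grasp.
state = {"thumb_mid": True, "thumb_dist": False, "index_mid": False,
         "index_dist": True, "middle_mid": False, "middle_dist": False,
         "palm": False}
print(should_grasp(state))  # True
```

As Equation 1 dictates, a grasp is triggered as soon as the thumb (or the palm) and either the index or the middle finger report contact with the selected object.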
PERFORMANCE ANALYSIS
In order to validate our proposal, a complete performance analysis has been carried out. This analysis ranges from a qualitative evaluation, which is prevalent in the assessment of VR systems, to a novel quantitative evaluation. The evaluation methods are briefly described as follows:
• Qualitative evaluation: based on the user experience of interacting with real objects from the YCB dataset in a photorealistic indoor scenario. Its purpose is to assess interaction realism, immersion, hand movement naturalness and other qualitative aspects described in Table 1 of Subsection 4.1, which addresses the qualitative evaluation in detail.
• Quantitative evaluation: based on the grasping quality in terms of realism (i.e. how visually plausible it is). We consider a grasp visually plausible when the hand palm or fingers are level with the object surface, as in a real-life grasp. However, when dealing with complex meshes, the collision detection precision can be significantly affected. In this case, fingers could penetrate the object surface, or remain above it when a collision is detected earlier than expected. This would result in an unnatural and unrealistic grasp. To visually quantify grasping quality, we propose a novel error metric based on computing the distance from each capsule trigger to the nearest contact point on the object surface. The quantitative evaluation and the proposed error metric are addressed in detail in Subsection 4.2.

Qualitative evaluation
Most VR experiments include qualitative and quantitative studies to measure realism and immersion. Arguably, questionnaires are the default method to qualitatively assess any experience, and the vast majority of works include them in one way or another [20] [21] [22]. However, one of their main problems is the absence of a standardized set of questions for different experiences that allows for fair and easy comparisons. The different nature of VR systems and experiences makes it challenging to find a set of evaluation questions that fits them all. Following the efforts of [23] towards a standardized embodiment questionnaire, we analyzed several works in the literature [24] [25] that included questionnaires to assess VR experiences in order to devise a standard one for virtual grasping systems. Inspired by such works, we have identified three main types of questions or aspects:
• Motor Control: this aspect considers the movement of the virtual hands as a whole and their responsiveness to the virtual reality controllers. Hands should move naturally and their movements must be caused exactly by the controllers, without unwanted movements and without limiting or restricting real movements to adapt to the virtual ones.
• Finger Movement: this aspect takes the specific finger movement into account. Such movements must be natural and plausible. Moreover, they must react properly to the user's intent.
• Interaction Realism: this aspect is related to the interaction of the hand and fingers with objects.
The questionnaire, shown in Table 1, is composed of fourteen questions related to the previously described aspects (Table 1, excerpt: "It seemed as if the virtual fingers were mine when grabbing an object"; Q10 "I felt that grabbing objects was clumsy and hard to achieve"; Q11 "It seemed as if finger movement were guided and unnatural"; Q12 "I felt that grasps were visually correct and natural"; Q13 "I felt that grasps were physically correct and natural"; Q14 "It seemed that fingers were adapting properly to the different geometries"). Following [23], the users of the study will be presented with these questions right after the end of the experience, in a randomized order to limit context effects. In addition, questions must be answered on a 7-point Likert scale: (+3) strongly agree, (+2) agree, (+1) somewhat agree, (0) neutral, (-1) somewhat disagree, (-2) disagree, and (-3) strongly disagree. Results will be presented as a single embodiment score using the following equations:

$$\text{Motor Control} = ((Q1 + Q2) - (Q3 + Q4))/4$$
$$\text{Finger Movement Realism} = (Q5 + Q6 + Q7)/3$$
$$\text{Interaction Realism} = ((Q8 + Q9) - (Q10 + Q11) + Q12 + Q13 + Q14)/7 \qquad (3)$$

Using the results of each individual aspect, we obtain the total embodiment score as follows:

$$\text{Score} = (\text{Motor Control} + \text{Finger Movement Realism} + \text{Interaction Realism} \times 2)/4 \qquad (4)$$

Interaction realism is the key aspect of this qualitative evaluation. For that reason, in Equation 4 we emphasize this aspect by weighting it higher.
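As a worked example of the scoring just defined, the short Python sketch below computes the three aspect scores and the total embodiment score of Equations 3 and 4 from one participant's Likert answers. The answer values used here are hypothetical and purely illustrative.

```python
def embodiment_scores(q):
    """Compute the per-aspect scores and the total embodiment score of
    Equations 3 and 4 from a dict of Likert answers q[1]..q[14] in [-3, +3]."""
    motor_control = ((q[1] + q[2]) - (q[3] + q[4])) / 4
    finger_movement = (q[5] + q[6] + q[7]) / 3
    interaction = ((q[8] + q[9]) - (q[10] + q[11]) + q[12] + q[13] + q[14]) / 7
    total = (motor_control + finger_movement + interaction * 2) / 4
    return motor_control, finger_movement, interaction, total

# Hypothetical answers from one participant (negatively worded items get low values):
answers = {i: 2 for i in range(1, 15)}
answers.update({3: -2, 4: -1, 10: -2, 11: -1})
print(embodiment_scores(answers))
```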
Quantitative evaluation
With the quantitative evaluation, we aim to evaluate grasping quality in terms of how visually plausible or realistic it is. In other words, our purpose is to visually quantify the performance of our grasping system, analyzing each finger position and how it fits the object mesh. When a collision is detected by a capsule trigger, we compute the nearest distance between the finger phalanx surface (delimited by the capsule trigger) and the object mesh (see Equation 8). In Figure 4, the red capsules represent 3D sphere-tracing volumes which provide information about the nearest collision from the trace starting point to the first contact point on the object surface that intersects the sphere volume. For each finger phalanx with an attached capsule trigger (represented in green), we cast a sphere trace, obtaining the nearest contact point on the object surface, represented as a lime-colored dot (impact point, Ip). In this representation, the total error for the index finger would be the average of the sum of the distances, in millimeters, between the surface of each phalanx and the nearest contact point on the object surface (see Equation 9). The nearest-distance computation is approximated by an equation developed to find the distance between the impact point and the plane that contains the capsule trigger center point and is perpendicular to the longitudinal axis of the red capsule. The capsule trigger centers are located on the surface of the hand mesh, so this computation approximates the nearest distance to the mesh well enough without being computationally too demanding. To compute this distance, we define the following vectors from the three input points (the starting point of the red capsule, Sp; the impact point, Ip; and the capsule trigger center point, CTc):

$$\vec{D}_{Ip} = Ip - Sp, \qquad \vec{D}_{CTc} = CTc - Sp \qquad (5)$$

where $\vec{D}_{Ip}$ is the vector from the starting point to the impact point, and $\vec{D}_{CTc}$ represents the direction of the longitudinal axis of the red capsule. They are represented in navy blue and purple, respectively, in Figure 4.
Then, we find the cosine of the angle they form through their dot product:

$$\vec{D}_{Ip} \cdot \vec{D}_{CTc} = |\vec{D}_{Ip}|\,|\vec{D}_{CTc}|\cos(\beta), \qquad \cos(\beta) = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{Ip}|\,|\vec{D}_{CTc}|} \qquad (6)$$

We can now substitute that cosine when computing the projection of $\vec{D}_{Ip}$ onto the longitudinal axis of the red capsule ($\vec{D}_{Pr}$ in Figure 4):

$$|\vec{D}_{Pr}| = \cos(\beta)\,|\vec{D}_{Ip}| = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{Ip}|\,|\vec{D}_{CTc}|}\,|\vec{D}_{Ip}| = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{CTc}|} \qquad (7)$$

Having that module, we only have to subtract $|\vec{D}_{CTc}|$ in order to obtain the desired distance:

$$ND(Ip, Sp, CTc) = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{CTc}|} - |\vec{D}_{CTc}| = \frac{(Ip - Sp) \cdot (CTc - Sp)}{|CTc - Sp|} - |CTc - Sp| \qquad (8)$$

Computing the distance in this way, with the final subtraction, yields a positive distance when the impact point is outside the hand mesh and a negative one when it is inside. We compute the nearest distance for each capsule trigger attached to a finger phalanx. As stated before, a negative distance indicates a finger penetration issue on the object surface, whereas a positive distance means that the finger stopped above the object surface. The ideal case is a zero distance, that is, the finger is perfectly situated on the object surface. The total error for the hand is given by the following equation:

$$HandError = \sum_{i=1}^{N_{Fingers}} \frac{\sum_{j=1}^{N_{CTF}} |ND(Ip_{ij}, Sp_{ij}, CTc_{ij})|}{N_{CapsuleTriggersPerFinger}} \qquad (9)$$

Dataset
To benchmark our grasping system we used a set of objects that are frequently used in daily life, such as food items (e.g. cracker box, cans, box of sugar, fruits, etc.), tool items (e.g. power drill, hammer, screwdrivers, etc.), kitchen items (e.g. eating utensils) and also spherically shaped objects (e.g. tennis ball, racquetball, golf ball, etc.). The Yale-CMU-Berkeley (YCB) Object and Model set [1] provides these real-life 3D textured models, scanned with outstanding accuracy and detail. The available objects have a wide variety of shapes, textures and sizes, as we can see in Figure 5. The advantage of using real-life objects is that users already have previous experience manipulating similar objects, so they will try to grab and interact with the objects in the same way.

Participants
For the performance analysis, we recruited ten participants (8M/2F) from the local campus. Four of them have experience with VR applications; the rest are inexperienced virtual reality users. Participants take part in both the qualitative and the quantitative evaluation. The performance analysis procedure is described in the following subsection, indicating the concrete tasks to be performed by each participant.

Procedure
The system performance analysis begins with the quantitative evaluation. In this first phase, the user will be embodied in a controlled scenario (video: https://youtu.be/4sPhLbHpywM) where 30 different objects will be spawned in a delimited area, with random orientations, and in the same order as represented in Figure 5. The user will try to grasp each object as they would in real life and as quickly as possible. For each grasp, the system will compute the error metric and will also store the time spent by the user in grasping the object.
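For reference, the following Python sketch shows how the error metric of Equations 8 and 9 could be reproduced offline from logged impact points, sphere-trace starting points and capsule trigger centers. It is a minimal sketch under assumed data structures (the point tuples and the `samples` dictionary are not part of the actual UE4 implementation).

```python
import numpy as np

def nearest_distance(impact_point, start_point, trigger_center):
    """Signed distance ND of Equation 8: positive if the impact point lies
    outside the hand mesh, negative if the finger penetrates the object."""
    d_ip = np.asarray(impact_point, dtype=float) - np.asarray(start_point, dtype=float)
    d_ctc = np.asarray(trigger_center, dtype=float) - np.asarray(start_point, dtype=float)
    ctc_norm = np.linalg.norm(d_ctc)
    projection = np.dot(d_ip, d_ctc) / ctc_norm   # |D_Pr| from Equation 7
    return projection - ctc_norm                  # ND from Equation 8

def hand_error(samples, triggers_per_finger=2):
    """Equation 9: per-finger average of |ND| over its capsule triggers,
    summed over all fingers. `samples` maps a finger name to a list of
    (impact_point, start_point, trigger_center) tuples."""
    error = 0.0
    for finger, trigger_samples in samples.items():
        distances = [abs(nearest_distance(ip, sp, ctc)) for ip, sp, ctc in trigger_samples]
        error += sum(distances) / triggers_per_finger
    return error
```

A negative `nearest_distance` value flags finger penetration and a positive one a finger hovering above the surface, matching the interpretation given above.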
The purpose of this first phase is to visually analyze grasping quality, which is directly related to the user's expertise in VR environments and, more concretely, with our grasping system. An experienced user knows the system limits, both when interacting with complex geometries and when handling large objects that make it difficult to perform the grasp quickly and naturally. For the qualitative evaluation, the same user will be embodied in a photorealistic scenario, replacing the mannequin hands with a human hand model with realistic textures. After interacting freely in the photorealistic virtual environment (video: https://youtu.be/65gdFdwsTVg), the user will answer the evaluation questionnaire defined in Table 1. The main purpose is the evaluation of interaction realism, finger and hand movement naturalness and motor control, among other qualitative aspects regarding the user experience in VR environments.

RESULTS AND DISCUSSION
In this section we discuss and analyze the results obtained from the performance analysis process. On the one hand, we draw conclusions from the average error obtained in grasping each object by each participant group, and also from the overall error per object taking into account all the participants (see Figure 7). On the other hand, we obtained the average elapsed time needed to grasp each object for each participant group, and also the average elapsed time needed for each object taking into account all the participants (see Figure 6). This allows us to draw conclusions about the most difficult objects to manipulate in terms of accuracy and elapsed grasping time. Moreover, we can compare the system performance achieved by inexperienced users with that of experienced ones.

Qualitative evaluation
The qualitative evaluation for each participant was calculated using Equation 3, obtaining a score for each qualitative aspect. In Table 2 we present, for each group of participants, the average score for each evaluation aspect and the total embodiment score computed using Equation 4. Regarding the results represented in Table 2, the assessment given by experienced users has been more disadvantageous, as they have more elaborate criteria given their previous experience with virtual reality applications. Finger movement realism (aspect 2) was evaluated similarly by both groups. This is because the hand closing and opening gestures are guided by the same animation in both cases. Finally, the reported results referring to interaction realism have been the lowest in both cases. This is mostly because users cannot control their individual finger movements, since the general hand gesture is controlled by a single trigger button on the controller. Nevertheless, the overall embodiment score obtained is 2.08 out of 3.0.

Quantitative evaluation
As expected, inexperienced users have taken longer to grasp almost all of the object set due to their lack of practice and expertise with the system. This is clearly represented in Figure 6, where experienced users only took longer in grasping some tools such as the flat screwdriver (Figure 5z) and the hammer (Figure 5aa). Inexperienced users take an average of 0.36 seconds longer to grab the objects. In practice, and regarding interaction, this is not a factor that makes a crucial difference. Analyzing Figure 6, the tuna fish can (Figure 5f), potted meat can (Figure 5h), spatula (Figure 5u), toy airplane (Figure 5ad) and bleach cleaner (Figure 5q) are the most time-consuming objects to grasp.
This is mainly because of their sizes and complex geometries. Since objects are spawned with a random orientation, this can also affect grasping times. Even so, we can conclude that the largest objects are those that users take the longest to grasp. Regarding Figure 7, we can observe that the errors obtained by both groups of participants are quite similar. The most significant differences were observed in the case of the power drill (Figure 5v) and the spatula. The power drill has a complex geometry and its size also hinders its grasp, as is the case for the spatula and the toy airplane. Analyzing the overall error in Figure 7, we conclude that the largest objects, such as the toy airplane, power drill and bleach cleaner, are those reporting the highest error. In addition, we observe how the overall error decreases from the first objects to the last ones. This is mainly because the user's skills and expertise with the grasping system improve progressively. Moreover, the results point to a steep learning curve.

APPLICATIONS
Our grasping system can be applied to several existing problems in different areas of interest, such as robotics [26], rehabilitation [27] and interaction using augmented reality [28]. In robotics, different works have explored how to implement robust grasp approaches that allow robots to interact with the environment. These contributions are organized into mainly four different blocks [29]: methods that rely on known objects and previously estimated grasp points [30], grasping methods for familiar objects [31], methods for unknown objects based on the analysis of object geometry [32], and automatic learning approaches [33]. Our approach is most closely related to this last block, where its use would potentially be a relevant contribution. As a direct application, our system enables human-robot knowledge transfer, where robots try to imitate human behaviour in performing grasps. Our grasping system is also useful for the rehabilitation of patients with hand motor difficulties, which could even be done remotely, assisted by an expert [34] or through an automatic system [35]. Several works have demonstrated the viability of patient rehabilitation in virtual environments [27], helping patients to improve the mobility of their hands in daily tasks [36]. Our novel error metric, in combination with other automatic learning methods, can be used to guide patients during rehabilitation with feedback information and instructions. This would make rehabilitation a more attractive process by quantifying the patient's progress and visualizing their improvement over the duration of the rehabilitation. Finally, our grasping system integrated in UnrealROX [2] enables many other computer vision and artificial intelligence applications by providing synthetic ground truth data, such as depth and normal maps, object masks, trajectories, stereo pairs, etc., of the virtual human hands interacting with real objects from the YCB dataset (Figure 8).

LIMITATIONS AND FUTURE WORKS
• Hand movement is based on a single animation regardless of object geometry. Depending on the object shape, we could vary the grasping gesture: spherical grasping, cylindrical grasping, finger pinch, key pinch, etc. However, our grasping gesture was experimentally the best when dealing with differently shaped objects.
• The object can be grasped with only one hand. The user can interact with different objects using both hands at the same time, but not with the same object using both hands.
• Sometimes it is difficult to deal with large objects, due to the initial hand posture or because objects slide out from the hand palm due to physical collisions. Experienced users can deal with this problem better.
As future work, and in order to improve our grasping system, we could vary the hand grip gesture according to the geometry of the object being manipulated. This means finding a correspondence between the object geometry and a simple shape, e.g. a tennis ball is similar to a sphere, so a spherical grasp movement would be used. At the application level, there are several possibilities, as discussed in the previous section. However, we would like to emphasize the use of the contact points obtained when grasping an object in virtual reality to transfer that knowledge and human behavior to real robots.

CONCLUSION
This work proposes a flexible and realistic-looking grasping system which enables smooth, real-time interaction with arbitrarily shaped objects in virtual reality environments. This approach is unconstrained by object geometry, is fully controlled by the user, and is modular and easily implemented on different meshes or skeletal configurations. In order to validate our approach, an exhaustive evaluation process was carried out. Our system was evaluated qualitatively and quantitatively by two groups of participants: those with previous experience in virtual reality environments (experienced users) and those without VR expertise (inexperienced users). For the quantitative evaluation, a new error metric has been proposed to evaluate each grasp by quantifying hand-object overlap. From the performance analysis results, we conclude that the overall user experience was satisfactory and positive. Analyzing the quantitative evaluation, the error difference between experienced and inexperienced users is subtle. Moreover, average errors become progressively smaller as more objects are grasped, which clearly indicates a steep learning curve. In addition, the qualitative analysis points to a natural and realistic interaction. Users can freely manipulate previously defined dynamic objects in the photorealistic environment. Moreover, grasping contact points can be easily extracted, thus enabling numerous applications, especially in the field of robotics. The Unreal Engine 4 project source code is available on GitHub alongside several video demonstrations. This approach can easily be implemented on different game engines.
5,795
1903.05238
2963943458
Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environments. The resulting grasps are visually realistic because the hand is automatically fitted to the object shape from a position and orientation determined by the user using the VR handheld controllers (e.g. Oculus Touch motion controllers). Our approach is flexible because it can be adapted to different hand meshes (e.g. human or robotic hands) and it is also easily customizable. Moreover, it enables interaction with different objects regardless of their geometries. In order to validate our proposal, an exhaustive qualitative and quantitative performance analysis has been carried out. On one hand, qualitative evaluation was used in the assessment of abstract aspects, such as motor control, finger movement realism, and interaction realism. On the other hand, for the quantitative evaluation a novel metric has been proposed to visually analyze the performed grasps. Performance analysis results indicate that previous experience with our grasping system is not a prerequisite for an enjoyable, natural and intuitive VR interaction experience.
Data-driven grasping approaches have existed for a long time @cite_21 . These methods are based on large databases of predefined hand poses selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped), which provide the ability to discriminate between different grasp types.
{ "abstract": [ "Abstract This paper addresses the important issue of automating grasping movement in the animation of virtual actors, and presents a methodology and algorithm to generate realistic looking grasping motion of arbitrary shaped objects. A hybrid approach using both forward and inverse kinematics is proposed. A database of predefined body postures and hand trajectories are generalized to adapt to a specific grasp. The reachable space is divided into small subvolumes, which enables the construction of the database. The paper also addresses some common problems of articulated figure animation. A new approach for body positioning with kinematic constraints on both hands is described. An efficient and accurate manipulation of joint constraints is also presented. Finally, we describe an interpolation algorithm which interpolates between two postures of an articulated figure by moving the end effector along a specific trajectory and maintaining all the joint angles in the feasible range. Results are quite satisfactory, and some are shown in the paper." ], "cite_N": [ "@cite_21" ], "mid": [ "1999329153" ] }
A Visually Plausible Grasping System for Object Manipulation and Interaction in Virtual Reality Environments
W ITH the advent of affordable VR headsets such as Oculus VR/Go and HTC Vive, many works and projects are using virtual environments for different purposes. Most of VR applications are related to the entertainment industry (i.e. games and 3D cinema) or architectural visualizations, where virtual scene realism is a cornerstone. Currently existing VR systems are limited by their resolution, field-of-view, frame rate, and interaction among other technical specifications. In order to enhance user VR experience, developers are also focused on implementing rich interactions with the virtual environment, allowing the user to explore, interact and manipulate scene objects as in the real world. Interaction is a crucial feature for training/simulation applications (e.g. flight, driving and medical simulators), and also teleoperation (e.g. robotics), where the user ability to interact and explore the simulated environments is paramount for achieving an immersive experience. For this purpose, most of VR devices come with a pair of handheld controllers which are fully tracked in 3D space and specifically designed for interaction. One of the most basic interaction tasks is object grasping and manipulation. In order to achieve an enjoyable experience in VR, a realistic, flexible and real-time grasping system is needed. However, • Sergiu Oprea, Pablo M. Gonzalez, Alberto G. Garcia grasp synthesis in manipulation tasks is not straightforward because of the unlimited number of different hand configurations, the variety of object types and their geometries, and also due to the selection of the most suitable grasp for every different object in terms of realism, kinematics and physics. Currently existing real-time approaches in VR are purely animation-driven, completely relying on the animations realism. Moreover, these approaches are constrained to a limited number of simple object geometries and unable to deal with unknown objects. For every different object type and geometry, predefined animations are needed. This fact hinders the user experience, limiting its interaction capabilities. For a complete immersion user should be able to interact and manipulate different virtual objects as in the real world. In this paper, we propose a real-time grasping system for object interaction in virtual reality environments. We aim to achieve natural and visually plausible interactions in photorealistic environments rendered by Unreal Engine. Taking advantage of headset tracking and motion controllers, a human operator can be embodied in such environments as a virtual human or robot agent to freely navigate and interact with objects. Our grasping system is able to deal with different object geometries, without the need of a predefined grasp animation for each. With our approach, fingers are automatically fitted to object shape and geometry. We constrain hand finger phalanges motion checking in realtime for collisions with the object geometry. Our grasping system was analyzed both qualitatively and quantitatively. On one side, for the qualitative analysis, grasping system was implemented in a photorealistic envi-arXiv:1903.05238v1 [cs.GR] 12 Mar 2019 ronment where the user is freely able to interact with real world objects extracted from the YCB dataset [1] (see Figure 1). The qualitative evaluation is based on a questionnaire that will address the user interaction experience in terms of realism during object manipulation and interaction, system flexibility and usability, and general VR experience. 
On the other side, a quantitative grasping system analysis was carried out, contrasting the elapsed time a user needs in grasping an object and grasp quality based on a novel error metric which quantifies the overlapping between hand fingers and grasped object. From the quantitative evaluation, we obtain individual errors for the last two phalanges of each finger, the time user needed to grasp the object and also the contact points. This information alongside other provided by UnrealROX [2] such as depth mpas, instance segmentations, normal maps, 3D bounding boxes and 6D object pose (see Figure 8), enables different robotic applications as described in Section 6. In summary, we make the three following contributions: • We propose a real-time, realistic looking and flexible grasping system for natural interaction with arbitrary shaped objects in virtual reality environments; • We propose a novel metric and procedure to analyze visual grasp quality in VR interactions quantifying hand-object overlapping; • We provide the contact points extracted during the interaction in both local and global system coordinates. The rest of the paper is structured as follows. First of all, Section 2 analyzes the latest works related to object interaction and manipulation in virtual environments. The core of this work is comprised in Section 3 where our approach is described in detail. Then, the performance analysis, with the qualitative and our novel quantitative evaluations, is discussed in Section 4. Analysis results are reported in Section 5. Then, several applications are discussed in Section 6. After that, limitations of our approach are covered in Section 7 alongside several feature works. Finally, some conclusions are drawn in the last Section 8. Data-driven approaches Grasping data-driven approaches have existed since a long time ago [3]. These methods are based on large databases of predefined hand poses selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped) which provide us the ability to discriminate between different grasp types. From this database, grasp poses are selected according with given object shape and geometry [6] [7]. Li et al. [6] construct a database with different hand poses and also object shapes and sizes. Despite having a good database, the process of hand poses selection is not straightforward since there can be multiple equally valid possibilities for the same gesture. To address this problem, Li et al. [6] proposed a shape-matching algorithm which returns multiple potential grasp poses. The selection process is also constrained by the hand high degree of freedom (DOF). In order to deal with dimensionality and redundancy many researchers have used techniques such as principal component analysis (PCA) [8] [9]. For the same purpose, Jorg et al. [10] studied the correlations between hand DOFs aiming to simplify hand models reducing DOF number. The results suggest to simplify hand models by reducing DOFs from 50 to 15 for both hands in conjunction without loosing relevant features. Hybrid data-driven approaches In order to achieve realistic object interactions, physical simulations on the objects should also be considered [11] [12] [13]. Moreover, hand and finger movement trajectories need to be both, kinematically and dynamically valid [14]. Pollard et al. [11] simulate hand interaction, such as two hands grasping each other in the handshake gesture. Bai et al. 
[13] simulate grasping an object, drop it on a specific spot on the palm and let it roll on the hand palm. A limitation of this approach is that information about the object must be known in advance, which disable robot to interact with unknown objects. By using an initial grasp pose and a desired object trajectory, the algorithm proposed by Liu [15] can generate physically-based hand manipulation poses varying the contact points with the object, grasping forces and also joint configurations. This approach works well for complex manipulations such as twist-opening a bottle. Ye and Liu [14] reconstruct a realistic hand motion and grasping generating feasible contact point trajectories. Selection of valid motions is defined as a randomized depthfirst tree traversal, where nodes are recursively expanded if they are kinematically and dynamically feasible. Otherwise, backtracking is performed in order to explore other possibilities. Virtual reality approaches This section is limited to virtual reality interaction using VR motion controllers, avoiding glove-based and bare-hand approaches. Implementation of the aforementioned techniques in virtual reality environments is a difficult task cause optimizations are needed to keep processes running in real time. Most of current existing approaches for flexible and realistic grasping are not suitable for real-time interaction. VR developers aim to create fast solutions with realistic and natural interactions. Recent approaches are directly related to the entertainment industry, i.e. video games. An excellent example is Lone Echo, a narrative adventure game which consists of manipulating tools and objects for solving puzzles. Hand animations are mostly procedurally generated, enabling grasping of complex geometries regardless their grasp angle. This approach [16] is based on a graph traversal heuristic which searches intersections between hand fingers and object surface mesh triangles. A* heuristic find the intersection that is nearest to the palm and also avoid invalid intersections. After calculating angles to make contact with each intersection point, highest angle is selected and fingers are rotated accordingly. Mostly implemented solutions in VR are animationbased [17] [18] [19]. These approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. Movements are predefined for concrete object geometries, hindering user interaction capabilities in the virtual environment. In [17], distance grab selection technique is implemented to enhance the user comfort when interacting in small play areas, while sitting or for grabbing objects on the floor. Grasping system is based on three trigger volumes attached to each hand: two small cylinders for short-range grasp, and a cone for long-range grabbing. Based on this approach, we used trigger volumes attached to finger phalanges to control its movement and detect object collisions more precisely. In this way we achieve a more flexible and visually plausible grasping system enhancing immersion and realism during interactions. GRASPING SYSTEM With the latest advances in rendering techniques, visualization of virtual reality (VR) environments is increasingly more photorealistic. Besides graphics, which are the cornerstone of most VR solutions, interaction is also an essential part to enhance the user experience and immersion. 
VR scene content is portrayed in a physically tangible way, inviting users to explore the environment, and interact or manipulate represented objects as in the real world. VR devices aim to provide very congruent means of primary interaction, described as a pair of handheld devices with very accurate 6D one-to-one tracking. The main purpose is to create rich interactions producing memorable and satisfying VR experiences. Most of the currently available VR solutions and games lack of a robust and natural object manipulation and interaction capabilities. This is because, bringing natural and intuitive interactions to VR is not straightforward, which makes VR development challenging at this stage. Interactions need to be in real-time and maintaining a high and solid frame rate, directly mapping user movement to VR input in order to avoid VR sickness (visual and vestibular mismatch). Maintaining the desired 90 frames per second (FPS) in a photorealistic scene alongside complex interactions is not straightforward. This indicates the need of a flexible grasping system designed to naturally and intuitively manipulate unknown objects of different geometries in real-time. Overview Our grasping approach was designed for real-time interaction and manipulation in virtual reality environments by providing a simple, modular, flexible, robust, and visually realistic grasping system. Its main features are described as follows: • Simple and modular: it can be easily integrated with other hand configurations. Its design is modular and adaptable to different hand skeletals and models. • Flexible: most of the currently available VR grasp solutions are purely animation-driven, thus limited to known geometries and unable to deal with previously unseen objects. In contrast, our grasping system is flexible as it allows interaction with unknown objects. In this way, the user can freely decide the object to interact with, without any restrictions. • Robust: unknown objects can have different geometries. However, our approach is able to adapt the virtual hand to objects, regardless of their shape. • Visually realistic: grasping action is fully controlled by the user, taking advantage of its previous experience and knowledge in grasping daily common realistic objects such as cans, cereal boxes, fruits, tools, etc. This makes resulting grasping visually realistic and natural just as a human would in real life. The combination of the above described features makes VR interaction a pleasant user experience, where object manipulation is smooth and intuitive. Our grasping works by detecting collisions with objects through the use of trigger actors placed experimentally on the finger phalanges. A trigger actor is a component from Unreal Engine 4 used for casting an event in response to an interaction, e.g. collision with another object. These components can be of different shapes, such as capsule, box, sphere, etc. In the Figure 2 capsule triggers are represented in green and palm sphere trigger in red. We experimentally placed two capsule triggers on the last two phalanges of each finger. We noticed that this configuration is the most effective in detecting objects collisions. Notice that collision detection is performed for each frame, so, for heavy configurations with many triggers, performance would be harmed. Components Our grasping system is composed of the components represented in the Figure 3. These components are defined as follows: • Object selection: selects the nearest object to the hand palm. 
Detection area is determined by the sphere Figure 2). The sphere trigger returns the world location of all the overlapped actors. As a result, the nearest actor can be determined by computing the distance from each overlapped actor to the center of the sphere trigger. Smallest distance will determine the nearest object, saving its reference for the other components. • Interaction manager: manages capsule triggers which are attached to finger phalanges as represented in Figure 2. If a capsule trigger reports an overlap event, the movement of its corresponding phalanx is blocked until hand is reopened or the overlapping with the manipulated object is over. The phalanx state (blocked or in movement) will be used as input to the grasping logic component. A phalanx is blocked if there is an overlap of the its corresponding capsule trigger with the manipulated object. • Finger movement: this component determines the movement of the fingers during the hand closing and opening animations. It ensures a smooth animation avoiding unexpected and unrealistic behavior in finger movement caused neither by a performance drop or other interaction issues. Basically, it monitors each variation in the rotation value of the phalanx. In the case of detecting an unexpected variation (i.e. big variation) during a frame change, missing intermediate values will be interpolated so as to keep finger movement smooth. • Grasping logic: this component manages when to grab or release an object. This decision is made based on the currently blocked phalanges determined with the interaction manager component. The object is grasped or released based on the following function: f (x) = true, if (th ph ∨ palm) ∧ (in ph ∨ mi ph ) f alse, otherwise(1) , where x = (th ph , in ph , mi ph , palm) is defined as th ph = thumb mid ∨ thumb dist in ph = index mid ∨ index dist mi ph = middle mid ∨ middle dist(2) Equation 1 determines when an object is grasped or released based on the inputs determined in Equation 2 where th ph , in ph , and mi ph , are the thumb, index and middle phalanges respectively. According to human hand morphology, mid and dist subscripts refer to the middle and distal phalanx (e.g. thumb dist references the distal phalanx of thumb finger and at the implementation level it is a boolean value). Implementation details Grasping system has been originally implemented in Unreal Engine 4 (UE4), however, it can be easily implemented in other engines such as Unity, which would also provide us with the necessary components for replicating the system (e.g. overlapping triggers). The implementation consists of UE4 blueprints and has been correctly structured in the components depicted in Figure 3 and described in the previous section. Implementation is available at Github 1 . PERFORMANCE ANALYSIS In order to validate our proposal, a complete performance analysis has been carried out. This analysis covers from a qualitative evaluation, which is prevalent in the assessment of VR systems, to a novel quantitative evaluation. Evaluation methods are briefly described as follows: • Qualitative evaluation: based on the user experience interacting with real objects from the YCB dataset in a photorealistic indoor scenario. Its purpose is to assess interaction realism, immersion, hand movement naturalness and other qualitative aspects described in Table 1 from the Subsection 4.1, which addresses qualitative evaluation in detail. • Quantitative evaluation: based on the grasping quality in terms of realism (i.e. 
how much it is visually plausible). We consider a visually plausible grasp when hand palm or fingers are level with the object surface, as in a real life grasping. However, when dealing with complex meshes, the collision detection precision can be significantly influenced. In this case, fingers could penetrate the object surface, or remain above its surface when a collision was detected earlier than expected. This would result in an unnatural and unrealistic grasp. To visually quantify grasping quality, we purpose a novel error metric based on computing the distance from each capsule trigger to the nearest contact point on the object surface. Quantitative evaluation and the proposed error metric are addressed in detail in Subsection 4.2. Qualitative evaluation Most VR experiments include qualitative and quantitative studies to measure its realism and immersion. Arguably, questionnaires are the default method to qualitatively assess any experience and the vast majority of works include them in one way or another [20] [21] [22]. However, one of the main problems with them is the absence of a standardized set of questions for different experiences that allows for 1. https://github.com/3dperceptionlab/unrealgrasp fair and easy comparisons. The different nature of the VR systems and experiences makes it challenging to find a set of evaluation questions that fits them all. Following the efforts of [23] towards a standardized embodiment questionnaire, we analyzed several works in the literature [24] [25] that included questionnaires to assess VR experiences to devise a standard one for virtual grasping systems. Inspired by such works, we have identified three main types of questions or aspects: • Motor Control: this aspect considers the movement of the virtual hands as a whole and its responsiveness to the virtual reality controllers. Hands should move naturally and their movements must be caused exactly by the controllers without unwanted movements and without limiting or restricting real movements to adapt to the virtual ones. • Finger Movement: this aspect takes the specific finger movement into account. Such movements must be natural and plausibly. Moreover, they must react properly to the user's intent. • Interaction Realism: this aspect is related to the interaction of the hand and fingers with objects. The questionnaire, shown in Table 1, is composed of fourteen questions related to the previously described aspects. Following [23], the users of the study will be pre- It seemed as if the virtual fingers were mine when grabbing an object Q10 I felt that grabbing objects was clumsy and hard to achieve Q11 It seemed as if finger movement were guided and unnatural Q12 I felt that grasps were visually correct and natural Q13 I felt that grasps were physically correct and natural Q14 It seemed that fingers were adapting properly to the different geometries sented with such questions right after the end of the experience in a randomized order to limit context effects. In addition, questions must be answered following the 7-point Likert-scale: (+3) strongly agree, (+2) agree, (+1) somewhat agree, (0) neutral, (-1) somewhat disagree, (-2) disagree, and (-3) strongly disagree. 
Results will be presented as a single embodiment score using the following equations: Motor Control = ((Q1 + Q2) − (Q3 + Q4))/4 Finger Movement Realism = (Q5 + Q6 + Q7)/3 Interaction Realism = ((Q8 + Q9) − (Q10 + Q11) + Q12 + Q13 + Q14)/7(3) , using the results of each individual aspect, we obtain the total embodiment score as follows: Score = (Motor Control + Finger Movement Realism + Interaction Realism * 2)/4 The interaction realism is the key aspect of this qualitative evaluation. So that, in the Equation 4 we emphasize this aspect by weighting it higher. Quantitative evaluation With the quantitative evaluation, we aim to evaluate grasping quality in terms of how much it is visually plausible or realistic. In other words, our purpose is to visually quantify our grasping performance, analyzing each finger position and how it fits the object mesh. When a collision is detected by a capsule trigger, we proceed with the calculation of the nearest distance between the finger phalanx surface (delimited by the capsule trigger) and the object mesh (see Equation 8). In Figure 4 the red capsules are representing 3D sphere tracing volumes which provide information of the nearest collision from the trace starting point to the first contact point on the object surface which intersects the sphere volume. For each finger phalanx with an attached capsule trigger represented in green, we throw a sphere trace obtaining the nearest contact points on the object surface represented as lime colored dots (impact point, Ip). In this representation, the total error for the index finger would be the average of the sum of the distances in millimeters between the surface of each phalanx and the nearest contact point on the object surface (see Equation 9). The nearest distance computation is approximated by an equation that was developed to find the distance between the impact point, and the plane that contains the capsule trigger center point and is perpendicular to the longitudinal axis of the red capsule. Capsule triggers centers are located on the surface of the hand mesh, so this computation should approximate the nearest distance to the mesh well enough, without being computationally too demanding. To compute this distance, we define the following vectors from the three input points (the starting point of the red capsule, the impact point and the capsule trigger center point): − − → D Ip = Ip − Sp − −− → D CT c = CT c − Sp(5) where − − → D Ip is the vector from the starting point to the impact point, and − −− → D CT c vector represents the direction of the longitudinal axis of the red capsule. They are represented in navy blue and purple respectively in Figure 4. 
Then, we find the cosine of the angle they form through their dot product: − − → D Ip · − −− → D CT c = | − − → D Ip | * | − −− → D CT c | * cos(β) cos(β) = − − → D Ip · − −− → D CT c | − − → D Ip | * | − −− → D CT c |(6) We can now substitute that cosine when computing the projection of − − → D Ip over the longitudinal axis of the red capsule ( − − → D P r in Figure 4): | − − → D P r | = cos(β) * | − − → D Ip | | − − → D P r | = − − → D Ip · − −− → D CT c | − −− → D CT c | * | − − → D Ip | * | − − → D Ip | | − − → D P r | = − − → D Ip · − −− → D CT c | − −− → D CT c |(7) Having that module, we only have to subtract | − −− → D CT c | in order to obtain the desired distance: N D(Ip, Sp, CT c) = − − → D Ip · − −− → D CT c | − −− → D CT c | − | − −− → D CT c | N D(Ip, Sp, CT c) = − −−−− → Ip − Sp · − −−−−−− → CT c − Sp | − −−−−−− → CT c − Sp| − | − −−−−−− → CT c − Sp|(8) Computing the distance like this, with this final subtraction, allows to obtain a positive distance when impact point is outside the hand mesh, and a negative one if it is inside. We compute the nearest distance per each capsule trigger attached to a finger phalanx. As stated before, if the distance is negative, this indicates a finger penetration issue on the object surface. Otherwise, if distance is positive, it means that finger stopped above the object surface. The ideal case is when a zero distance is obtained, that is, the finger is perfectly situated on the object surface. The total error for the hand is represented by the following equation: HandError = N F ingers i=1 N CT F j=1 |N D(Ip ij , Sp ij , CT c ij )| N CapsuleT riggersP erF inger(9) Dataset To benchmark our grasping system we used a set of objects that are frequently used in daily life, such as, food items (e.g. cracker box, cans, box of sugar, fruits, etc.), tool items (e.g. power drill, hammer, screwdrivers, etc.), kitchen items (e.g. eating utensils) and also spherical shaped objects (e.g. tennis ball, racquetball, golf ball, etc.). Yale-CMU-Berkeley (YCB) Object and Model set [1] provides us these reallife 3D textured models scanned with outstanding accuracy and detail. Available objects have a wide variety of shapes, textures and sizes as we can see in Figure 5. The advantage of using real life objects is that the user already has a previous experience manipulating similar objects so he will try to grab and interact with the objects in the same way. Participants For the performance analysis, we recruited ten participants (8M/2F) from the local campus. Four of them have experience with VR applications. The rest are inexperienced virtual reality users. Participants will take part on both qualitative and quantitative evaluation. The performance analysis procedure will be described in the following subsection, indicating the concrete tasks to be performed by each participant. Procedure The system performance analysis begins with the quantitative evaluation. In this first phase, the user will be embodied in a controlled scenario 2 where 30 different objects will be spawned in a delimited area, with random orientation, and in the same order as represented in Figure 5. The user will try to grasp the object as he would do in real life and as quickly as possible. For each grasping, the system will compute the error metric and will also store the time spent by the user in grasping the object. 
The purpose of this first phase is to visually analyze grasping quality, which is directly related to user expertise in VR environments and, more concretely, with our grasping system. An experienced user would know the system limits, both when interacting with complex geometries and when handling large objects that would make it difficult to perform the grasp quickly and naturally. For the qualitative evaluation, the same user will be embodied in a photorealistic scenario, replacing the mannequin hands with a human hand model with realistic textures. After interacting freely in the photorealistic virtual environment (see https://youtu.be/65gdFdwsTVg), the user will have to answer the evaluation questionnaire defined in Table 1. The main purpose is the evaluation of interaction realism, finger and hand movement naturalness and motor control, among other qualitative aspects regarding the user experience in VR environments.

RESULTS AND DISCUSSION
In this section we discuss and analyze the results obtained from the performance analysis process. On the one hand, we draw conclusions from the average error obtained in grasping each object by each participant group, and also from the overall error per object taking into account all the participants (see Figure 7). On the other hand, we obtained the average elapsed time needed to grasp each object for each participant group, and also the average elapsed time needed for each object taking into account all the participants (see Figure 6). This allows us to draw conclusions about the most difficult objects to manipulate in terms of accuracy and elapsed time for grasping. Moreover, we can compare the system performance achieved by inexperienced users with that of experienced ones.

Qualitative evaluation
The qualitative evaluation for each participant was calculated using Equation 3, obtaining a score for each qualitative aspect. In Table 2 we report, for each group of participants, the average score for each evaluation aspect and the total embodiment score computed using Equation 4. Regarding the results represented in Table 2, the assessment by experienced users was more unfavorable, as they have a more elaborate criterion given their previous experience with virtual reality applications. Finger movement realism (aspect 2) was evaluated similarly by both groups. This is because the hand closing and opening gestures are guided by the same animation in both cases. Finally, the reported results referring to interaction realism are the lowest in both cases. This is mostly because users cannot control individual finger movements, since the overall hand gesture is controlled by a single trigger button of the controller. Nevertheless, the overall embodiment score obtained is 2.08 out of 3.0.

Quantitative evaluation
As expected, inexperienced users took longer to grasp almost every object in the set due to their lack of practice and expertise with the system. This is clearly represented in Figure 6, where experienced users only took longer when grasping some tools, such as the flat screwdriver (Figure 5z) and the hammer (Figure 5aa). Inexperienced users take an average of 0.36 seconds longer to grab the objects. In practice, and regarding interaction, this is not a factor that makes a crucial difference. Analyzing Figure 6, the tuna fish can (Figure 5f), potted meat can (Figure 5h), spatula (Figure 5u), toy airplane (Figure 5ad) and bleach cleaner (Figure 5q) are the most time-consuming objects for users to grasp.
This is mainly because of their sizes and complex geometries. Since objects are spawned with a random orientation, this can also affect grasping times. Even so, we can conclude that the largest objects are those that the user takes the longest to grasp. Regarding Figure 7, we can observe that the errors obtained by both groups of participants are quite similar. The most significant differences were observed for the power drill (Figure 5v) and the spatula. The power drill has a complex geometry and its size also hinders its grasp, just as with the spatula and the toy airplane. Analyzing the overall error in Figure 7, we conclude that the largest objects, such as the toy airplane, power drill and bleach cleaner, are those reporting the highest error rates. In addition, we observe how the overall error decreases from the first objects to the last ones. This is mainly because user skills and expertise with the grasping system improve progressively. Moreover, the results point to a steep learning curve.

APPLICATIONS
Our grasping system can be applied to several existing problems in different areas of interest, such as robotics [26], rehabilitation [27] and interaction using augmented reality [28]. In robotics, different works have explored robust grasp approaches that allow robots to interact with the environment. These contributions are mainly organized into four blocks [29]: methods that rely on known objects and previously estimated grasp points [30], grasping methods for familiar objects [31], methods for unknown objects based on the analysis of object geometry [32], and automatic learning approaches [33]. Our approach is most closely related to this last block, where its use would potentially be a relevant contribution. As a direct application, our system enables human-robot knowledge transfer, where robots try to imitate human grasping behaviour. Our grasping system is also useful for the rehabilitation of patients with hand motor difficulties, which could even be done remotely, assisted by an expert [34] or through an automatic system [35]. Several works have demonstrated the viability of patient rehabilitation in virtual environments [27], helping patients to improve the mobility of their hands in daily tasks [36]. Our novel error metric, in combination with other automatic learning methods, can be used to guide patients during rehabilitation with feedback information and instructions. This would make rehabilitation a more attractive process, by quantifying the patient's progress and visualizing their improvements over the duration of the rehabilitation. Finally, our grasping system integrated in UnrealROX [2] enables many other computer vision and artificial intelligence applications by providing synthetic ground truth data, such as depth and normal maps, object masks, trajectories, stereo pairs, etc., of the virtual human hands interacting with real objects from the YCB dataset (Figure 8).

LIMITATIONS AND FUTURE WORKS
• Hand movement is based on a single animation, regardless of object geometry. Depending on the object shape, we could vary the grasping gesture: spherical grasping, cylindrical grasping, finger pinch, key pinch, etc. However, our grasping gesture was experimentally the best when dealing with objects of different shapes.
• The object can be grasped with only one hand. The user can interact with different objects using both hands at the same time, but not with the same object using both hands.
• Sometimes it is difficult to deal with large objects, due to the initial hand posture or because objects slide out of the hand palm due to physical collisions. Experienced users can deal with this problem better.
As future work, and in order to improve our grasping system, we could vary the hand grip gesture according to the geometry of the object being manipulated. This involves finding a correspondence between the object geometry and a simple shape; e.g. a tennis ball is similar to a sphere, so a spherical grasp movement would be performed. At the application level, there are several possibilities, as discussed in the previous section. However, we would like to emphasize the use of the contact points obtained when grasping an object in virtual reality to transfer that knowledge and human behavior to real robots.

CONCLUSION
This work proposes a flexible and realistic-looking grasping system which enables smooth, real-time interaction with arbitrarily shaped objects in virtual reality environments. The approach is unconstrained by object geometry, is fully controlled by the user, and is modular and easily implemented on different meshes or skeletal configurations. In order to validate our approach, an exhaustive evaluation process was carried out. Our system was evaluated qualitatively and quantitatively by two groups of participants: with previous experience in virtual reality environments (experienced users) and without expertise in VR (inexperienced users). For the quantitative evaluation, a new error metric has been proposed to evaluate each grasp, quantifying hand-object overlapping. From the performance analysis results, we conclude that the overall user experience was satisfactory and positive. Analyzing the quantitative evaluation, the error difference between experienced and inexperienced users is subtle. Moreover, average errors become progressively smaller as more objects are grasped, which clearly indicates a steep learning curve. In addition, the qualitative analysis points to a natural and realistic interaction. Users can freely manipulate previously defined dynamic objects in the photorealistic environment. Moreover, grasping contact points can be easily extracted, thus enabling numerous applications, especially in the field of robotics. The Unreal Engine 4 project source code is available on GitHub alongside several video demonstrations. This approach can easily be implemented in other game engines.
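As a small illustration of the contact-point extraction mentioned above, the sketch below converts a recorded world-space contact point into the grasped object's local frame, which is the kind of bookkeeping needed to export contact points in both global and local coordinates. This is a hedged sketch rather than the UE4 implementation; the object pose, the example values and all names are hypothetical.

```python
# Hedged sketch: expressing a world-space contact point in the grasped object's
# local frame, given the object pose (translation t, rotation matrix R) assumed
# to be available from the engine. Names and values are hypothetical.
import numpy as np

def world_to_object_local(p_world, t_object, R_object):
    """Inverse rigid transform: p_local = R^T (p_world - t)."""
    p_world = np.asarray(p_world, dtype=float)
    t_object = np.asarray(t_object, dtype=float)
    R_object = np.asarray(R_object, dtype=float)
    return R_object.T @ (p_world - t_object)

# Example: object translated by (100, 0, 0) mm and rotated 90 degrees about Z.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
contact_world = np.array([110.0, 5.0, 20.0])
print(world_to_object_local(contact_world, [100.0, 0.0, 0.0], R))  # -> [5., -10., 20.]
```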
5,795
1903.05238
2963943458
Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environments. The resulting grasps are visually realistic because the hand is automatically fitted to the object shape from a position and orientation determined by the user using the VR handheld controllers (e.g. Oculus Touch motion controllers). Our approach is flexible because it can be adapted to different hand meshes (e.g. human or robotic hands), and it is also easily customizable. Moreover, it enables interaction with different objects regardless of their geometries. In order to validate our proposal, an exhaustive qualitative and quantitative performance analysis has been carried out. On the one hand, the qualitative evaluation was used to assess abstract aspects such as motor control, finger movement realism and interaction realism. On the other hand, for the quantitative evaluation a novel metric has been proposed to visually analyze the performed grasps. The performance analysis results indicate that previous experience with our grasping system is not a prerequisite for an enjoyable, natural and intuitive VR interaction experience.
The selection process is also constrained by the hand's high number of degrees of freedom (DOFs). In order to deal with dimensionality and redundancy, many researchers have used techniques such as principal component analysis (PCA) @cite_1 @cite_28. For the same purpose, @cite_22 studied the correlations between hand DOFs, aiming to simplify hand models by reducing the number of DOFs. The results suggest simplifying hand models by reducing the DOFs from 50 to 15 for both hands together, without losing relevant features.
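As a toy illustration of how PCA can expose the low-dimensional structure of hand motion mentioned above, the following sketch runs PCA on synthetic, synergy-driven joint angles; the data and numbers are invented and are not taken from the cited studies.

```python
# Toy sketch of PCA on synthetic hand joint angles: 20 correlated DOFs driven
# by 3 latent "synergies", illustrating how a few components can explain most
# of the variance. Purely illustrative; not based on the cited work's data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dofs, n_latent = 500, 20, 3
latent = rng.normal(size=(n_samples, n_latent))          # grasp synergies
mixing = rng.normal(size=(n_latent, n_dofs))             # how the DOFs co-vary
angles = latent @ mixing + 0.05 * rng.normal(size=(n_samples, n_dofs))

X = angles - angles.mean(axis=0)
# PCA via SVD of the centred data matrix.
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
print("variance explained by first 3 components:", round(explained[:3].sum(), 3))
```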
{ "abstract": [ "In this paper, we build upon recent advances in neuroscience research which have shown that control of the human hand during grasping is dominated by movement in a configuration space of highly reduced dimensionality. We extend this concept to robotic hands and show how a similar dimensionality reduction can be defined for a number of different hand models. This framework can be used to derive planning algorithms that produce stable grasps even for highly complex hand designs. Furthermore, it offers a unified approach for controlling different hands, even if the kinematic structures of the models are significantly different. We illustrate these concepts by building a comprehensive grasp planner that can be used on a large variety of robotic hands under various constraints.", "Abstract This article reports an experimental study that aimed to quantitatively analyze motion coordination patterns across digits 2–5 (index to little finger), and examine the kinematic synergies during manipulative and gestic acts. Twenty-eight subjects (14 males and 14 females) performed two types of tasks, both right-handed: (1) cylinder-grasping that involved concurrent voluntary flexion of digits 2–5, and (2) voluntary flexion of individual fingers from digit 2 to 5 (i.e., one at a time). A five-camera opto-electronic motion capture system measured trajectories of 21 miniature reflective markers strategically placed on the dorsal surface landmarks of the hand. Joint angular profiles for 12 involved flexion–extension degrees of freedom (DOF's) were derived from the measured coordinates of surface markers. Principal components analysis (PCA) was used to examine the temporal covariation between joint angles. A mathematical modeling procedure, based on hyperbolic tangent functions, characterized the sigmoidal shaped angular profiles with four kinematically meaningful parameters. The PCA results showed that for all the movement trials ( n =280), two principal components accounted for at least 98 of the variance. The angular profiles ( n =2464) were accurately characterized, with the mean (±SD) coefficient of determination ( R 2 ) and root-mean-square-error (RMSE) being 0.95 (±0.12) and 1.03° (±0.82°), respectively. The resulting parameters which quantified both the spatial and temporal aspects of angular profiles revealed stereotypical patterns including a predominant (87 of all trials) proximal-to-distal flexion sequence and characteristic interdependence – involuntary joint flexion induced by the voluntarily flexed joint. The principal components' weights and the kinematic parameters also exhibited qualitatively similar variation patterns. Motor control interpretations and new insights regarding the underlying synergistic mechanisms, particularly in relation to previous findings on force synergies, are discussed.", "" ], "cite_N": [ "@cite_28", "@cite_1", "@cite_22" ], "mid": [ "2138983671", "2066864006", "" ] }
A Visually Plausible Grasping System for Object Manipulation and Interaction in Virtual Reality Environments
With the advent of affordable VR headsets such as Oculus VR/Go and HTC Vive, many works and projects are using virtual environments for different purposes. Most VR applications are related to the entertainment industry (i.e. games and 3D cinema) or to architectural visualizations, where virtual scene realism is a cornerstone. Currently existing VR systems are limited by their resolution, field of view, frame rate and interaction, among other technical specifications. In order to enhance the user's VR experience, developers are also focused on implementing rich interactions with the virtual environment, allowing the user to explore, interact with and manipulate scene objects as in the real world. Interaction is a crucial feature for training/simulation applications (e.g. flight, driving and medical simulators) and also for teleoperation (e.g. robotics), where the user's ability to interact with and explore the simulated environment is paramount for achieving an immersive experience. For this purpose, most VR devices come with a pair of handheld controllers which are fully tracked in 3D space and specifically designed for interaction. One of the most basic interaction tasks is object grasping and manipulation. In order to achieve an enjoyable experience in VR, a realistic, flexible and real-time grasping system is needed. However, grasp synthesis in manipulation tasks is not straightforward because of the unlimited number of different hand configurations, the variety of object types and their geometries, and also the selection of the most suitable grasp for every different object in terms of realism, kinematics and physics. Currently existing real-time approaches in VR are purely animation-driven, relying completely on the realism of the animations. Moreover, these approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. For every different object type and geometry, predefined animations are needed. This fact hinders the user experience, limiting their interaction capabilities. For complete immersion, the user should be able to interact with and manipulate different virtual objects as in the real world. In this paper, we propose a real-time grasping system for object interaction in virtual reality environments. We aim to achieve natural and visually plausible interactions in photorealistic environments rendered by Unreal Engine. Taking advantage of headset tracking and motion controllers, a human operator can be embodied in such environments as a virtual human or robot agent to freely navigate and interact with objects. Our grasping system is able to deal with different object geometries without the need for a predefined grasp animation for each one. With our approach, fingers are automatically fitted to the object shape and geometry. We constrain the motion of the hand finger phalanges by checking in real time for collisions with the object geometry. Our grasping system was analyzed both qualitatively and quantitatively. On one side, for the qualitative analysis, the grasping system was implemented in a photorealistic environment where the user is freely able to interact with real-world objects extracted from the YCB dataset [1] (see Figure 1). The qualitative evaluation is based on a questionnaire that addresses the user interaction experience in terms of realism during object manipulation and interaction, system flexibility and usability, and the general VR experience.
On the other side, a quantitative analysis of the grasping system was carried out, contrasting the elapsed time a user needs to grasp an object with the grasp quality, based on a novel error metric which quantifies the overlapping between the hand fingers and the grasped object. From the quantitative evaluation, we obtain individual errors for the last two phalanges of each finger, the time the user needed to grasp the object and also the contact points. This information, alongside other data provided by UnrealROX [2] such as depth maps, instance segmentations, normal maps, 3D bounding boxes and 6D object poses (see Figure 8), enables different robotic applications, as described in Section 6. In summary, we make the following three contributions:
• We propose a real-time, realistic-looking and flexible grasping system for natural interaction with arbitrarily shaped objects in virtual reality environments;
• We propose a novel metric and procedure to analyze visual grasp quality in VR interactions, quantifying hand-object overlapping;
• We provide the contact points extracted during the interaction in both local and global system coordinates.
The rest of the paper is structured as follows. First of all, Section 2 analyzes the latest works related to object interaction and manipulation in virtual environments. The core of this work is contained in Section 3, where our approach is described in detail. Then, the performance analysis, with the qualitative and our novel quantitative evaluations, is discussed in Section 4. The analysis results are reported in Section 5. Then, several applications are discussed in Section 6. After that, the limitations of our approach are covered in Section 7, alongside several future works. Finally, some conclusions are drawn in Section 8.

Data-driven approaches
Data-driven grasping approaches have existed for a long time [3]. These methods are based on large databases of predefined hand poses, selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped), which provide the ability to discriminate between different grasp types. From this database, grasp poses are selected according to the given object shape and geometry [6] [7]. Li et al. [6] construct a database with different hand poses and also object shapes and sizes. Despite having a good database, the process of hand pose selection is not straightforward, since there can be multiple equally valid possibilities for the same gesture. To address this problem, Li et al. [6] proposed a shape-matching algorithm which returns multiple potential grasp poses. The selection process is also constrained by the hand's high number of degrees of freedom (DOFs). In order to deal with dimensionality and redundancy, many researchers have used techniques such as principal component analysis (PCA) [8] [9]. For the same purpose, Jorg et al. [10] studied the correlations between hand DOFs, aiming to simplify hand models by reducing the number of DOFs. The results suggest simplifying hand models by reducing the DOFs from 50 to 15 for both hands together, without losing relevant features.

Hybrid data-driven approaches
In order to achieve realistic object interactions, physical simulations on the objects should also be considered [11] [12] [13]. Moreover, hand and finger movement trajectories need to be both kinematically and dynamically valid [14]. Pollard et al. [11] simulate hand interaction, such as two hands grasping each other in a handshake gesture. Bai et al.
[13] simulate grasping an object, dropping it on a specific spot on the palm and letting it roll on the hand palm. A limitation of this approach is that information about the object must be known in advance, which prevents interaction with unknown objects. By using an initial grasp pose and a desired object trajectory, the algorithm proposed by Liu [15] can generate physically-based hand manipulation poses, varying the contact points with the object, the grasping forces and also the joint configurations. This approach works well for complex manipulations such as twist-opening a bottle. Ye and Liu [14] reconstruct realistic hand motion and grasping by generating feasible contact point trajectories. The selection of valid motions is defined as a randomized depth-first tree traversal, where nodes are recursively expanded if they are kinematically and dynamically feasible; otherwise, backtracking is performed in order to explore other possibilities.

Virtual reality approaches
This section is limited to virtual reality interaction using VR motion controllers, leaving aside glove-based and bare-hand approaches. Implementing the aforementioned techniques in virtual reality environments is a difficult task because optimizations are needed to keep processes running in real time. Most currently existing approaches for flexible and realistic grasping are not suitable for real-time interaction. VR developers aim to create fast solutions with realistic and natural interactions. Recent approaches are directly related to the entertainment industry, i.e. video games. An excellent example is Lone Echo, a narrative adventure game which consists of manipulating tools and objects to solve puzzles. Hand animations are mostly procedurally generated, enabling the grasping of complex geometries regardless of their grasp angle. This approach [16] is based on a graph traversal heuristic which searches for intersections between the hand fingers and the object's surface mesh triangles. An A* heuristic finds the intersection that is nearest to the palm and also avoids invalid intersections. After calculating the angles needed to make contact with each intersection point, the highest angle is selected and the fingers are rotated accordingly. Most solutions implemented in VR are animation-based [17] [18] [19]. These approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. Movements are predefined for concrete object geometries, hindering the user's interaction capabilities in the virtual environment. In [17], a distance-grab selection technique is implemented to enhance user comfort when interacting in small play areas, while sitting, or when grabbing objects from the floor. The grasping system is based on three trigger volumes attached to each hand: two small cylinders for short-range grasps and a cone for long-range grabbing. Based on this approach, we used trigger volumes attached to the finger phalanges to control their movement and detect object collisions more precisely. In this way, we achieve a more flexible and visually plausible grasping system, enhancing immersion and realism during interactions.

GRASPING SYSTEM
With the latest advances in rendering techniques, the visualization of virtual reality (VR) environments is increasingly photorealistic. Besides graphics, which are the cornerstone of most VR solutions, interaction is also essential to enhance the user experience and immersion.
VR scene content is portrayed in a physically tangible way, inviting users to explore the environment and to interact with or manipulate the represented objects as in the real world. VR devices aim to provide very congruent means of primary interaction, typically a pair of handheld devices with very accurate 6D one-to-one tracking. The main purpose is to create rich interactions producing memorable and satisfying VR experiences. Most of the currently available VR solutions and games lack robust and natural object manipulation and interaction capabilities. This is because bringing natural and intuitive interactions to VR is not straightforward, which makes VR development challenging at this stage. Interactions need to run in real time while maintaining a high and stable frame rate, directly mapping user movement to VR input in order to avoid VR sickness (visual and vestibular mismatch). Maintaining the desired 90 frames per second (FPS) in a photorealistic scene alongside complex interactions is not straightforward. This indicates the need for a flexible grasping system designed to naturally and intuitively manipulate unknown objects of different geometries in real time.

Overview
Our grasping approach was designed for real-time interaction and manipulation in virtual reality environments, providing a simple, modular, flexible, robust and visually realistic grasping system. Its main features are described as follows:
• Simple and modular: it can be easily integrated with other hand configurations. Its design is modular and adaptable to different hand skeletons and models.
• Flexible: most of the currently available VR grasp solutions are purely animation-driven, and thus limited to known geometries and unable to deal with previously unseen objects. In contrast, our grasping system is flexible, as it allows interaction with unknown objects. In this way, the user can freely decide which object to interact with, without any restrictions.
• Robust: unknown objects can have different geometries. However, our approach is able to adapt the virtual hand to objects regardless of their shape.
• Visually realistic: the grasping action is fully controlled by the user, taking advantage of the user's previous experience and knowledge in grasping common everyday objects such as cans, cereal boxes, fruits, tools, etc. This makes the resulting grasps visually realistic and natural, just as a human would perform them in real life.
The combination of the above-described features makes VR interaction a pleasant user experience, where object manipulation is smooth and intuitive. Our grasping system works by detecting collisions with objects through trigger actors placed experimentally on the finger phalanges. A trigger actor is a component from Unreal Engine 4 used for casting an event in response to an interaction, e.g. a collision with another object. These components can have different shapes, such as capsule, box, sphere, etc. In Figure 2, the capsule triggers are represented in green and the palm sphere trigger in red. We experimentally placed two capsule triggers on the last two phalanges of each finger. We noticed that this configuration is the most effective in detecting object collisions. Notice that collision detection is performed every frame, so for heavy configurations with many triggers, performance would be harmed.

Components
Our grasping system is composed of the components represented in Figure 3. These components are defined as follows:
• Object selection: selects the nearest object to the hand palm.
The detection area is determined by the sphere trigger attached to the hand palm (shown in red in Figure 2). The sphere trigger returns the world location of all the overlapped actors. As a result, the nearest actor can be determined by computing the distance from each overlapped actor to the center of the sphere trigger. The smallest distance determines the nearest object, whose reference is saved for the other components.
• Interaction manager: manages the capsule triggers attached to the finger phalanges, as represented in Figure 2. If a capsule trigger reports an overlap event, the movement of its corresponding phalanx is blocked until the hand is reopened or the overlap with the manipulated object is over. The phalanx state (blocked or in movement) is used as input to the grasping logic component. A phalanx is blocked if its corresponding capsule trigger overlaps the manipulated object.
• Finger movement: this component determines the movement of the fingers during the hand closing and opening animations. It ensures a smooth animation, avoiding unexpected and unrealistic finger movements caused either by a performance drop or by other interaction issues. Basically, it monitors each variation in the rotation value of a phalanx. If an unexpected (i.e. large) variation is detected between frames, the missing intermediate values are interpolated so as to keep the finger movement smooth.
• Grasping logic: this component manages when to grab or release an object. This decision is based on the currently blocked phalanges, as determined by the interaction manager component. The object is grasped or released based on the following function:

$$f(x) = \begin{cases} \text{true}, & \text{if } (th_{ph} \lor palm) \land (in_{ph} \lor mi_{ph}) \\ \text{false}, & \text{otherwise} \end{cases} \qquad (1)$$

where $x = (th_{ph}, in_{ph}, mi_{ph}, palm)$ is defined as

$$th_{ph} = thumb_{mid} \lor thumb_{dist}, \qquad in_{ph} = index_{mid} \lor index_{dist}, \qquad mi_{ph} = middle_{mid} \lor middle_{dist} \qquad (2)$$

Equation 1 determines when an object is grasped or released based on the inputs defined in Equation 2, where $th_{ph}$, $in_{ph}$ and $mi_{ph}$ are the thumb, index and middle phalanges respectively. Following human hand morphology, the mid and dist subscripts refer to the middle and distal phalanx (e.g. $thumb_{dist}$ references the distal phalanx of the thumb finger; at the implementation level each of these is a boolean value).

Implementation details
The grasping system was originally implemented in Unreal Engine 4 (UE4); however, it can easily be implemented in other engines such as Unity, which would also provide the necessary components for replicating the system (e.g. overlapping triggers). The implementation consists of UE4 blueprints and is structured into the components depicted in Figure 3 and described in the previous section. The implementation is available on GitHub (https://github.com/3dperceptionlab/unrealgrasp).

PERFORMANCE ANALYSIS
In order to validate our proposal, a complete performance analysis has been carried out. This analysis ranges from a qualitative evaluation, which is prevalent in the assessment of VR systems, to a novel quantitative evaluation. The evaluation methods are briefly described as follows:
• Qualitative evaluation: based on the user experience of interacting with real objects from the YCB dataset in a photorealistic indoor scenario. Its purpose is to assess interaction realism, immersion, hand movement naturalness and other qualitative aspects described in Table 1 in Subsection 4.1, which addresses the qualitative evaluation in detail.
• Quantitative evaluation: based on grasping quality in terms of realism (i.e.
how visually plausible it is). We consider a grasp visually plausible when the hand palm or fingers are level with the object surface, as in a real-life grasp. However, when dealing with complex meshes, the collision detection precision can be significantly affected. In that case, fingers could penetrate the object surface, or remain above it when a collision was detected earlier than expected. This would result in an unnatural and unrealistic grasp. To visually quantify grasping quality, we propose a novel error metric based on computing the distance from each capsule trigger to the nearest contact point on the object surface. The quantitative evaluation and the proposed error metric are addressed in detail in Subsection 4.2.

Qualitative evaluation
Most VR experiments include qualitative and quantitative studies to measure their realism and immersion. Arguably, questionnaires are the default method to qualitatively assess any experience, and the vast majority of works include them in one way or another [20] [21] [22]. However, one of the main problems with them is the absence of a standardized set of questions for different experiences that allows for fair and easy comparisons. The different nature of VR systems and experiences makes it challenging to find a set of evaluation questions that fits them all. Following the efforts of [23] towards a standardized embodiment questionnaire, we analyzed several works in the literature [24] [25] that included questionnaires to assess VR experiences, in order to devise a standard one for virtual grasping systems. Inspired by such works, we have identified three main types of questions or aspects:
• Motor Control: this aspect considers the movement of the virtual hands as a whole and their responsiveness to the virtual reality controllers. Hands should move naturally and their movements must be caused exactly by the controllers, without unwanted movements and without limiting or restricting real movements to adapt to the virtual ones.
• Finger Movement: this aspect takes the specific finger movements into account. Such movements must be natural and plausible. Moreover, they must react properly to the user's intent.
• Interaction Realism: this aspect is related to the interaction of the hand and fingers with objects.
The questionnaire, shown in Table 1, is composed of fourteen questions related to the previously described aspects. (Table 1 includes items such as Q9 "It seemed as if the virtual fingers were mine when grabbing an object", Q10 "I felt that grabbing objects was clumsy and hard to achieve", Q11 "It seemed as if finger movement were guided and unnatural", Q12 "I felt that grasps were visually correct and natural", Q13 "I felt that grasps were physically correct and natural", and Q14 "It seemed that fingers were adapting properly to the different geometries".) Following [23], the users of the study will be presented with such questions right after the end of the experience, in a randomized order to limit context effects. In addition, questions must be answered following the 7-point Likert scale: (+3) strongly agree, (+2) agree, (+1) somewhat agree, (0) neutral, (-1) somewhat disagree, (-2) disagree, and (-3) strongly disagree.
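To make the aggregation of the questionnaire concrete, the sketch below maps a set of hypothetical 7-point Likert answers for Q1-Q14 to the three aspect scores and the weighted total embodiment score of Equations 3 and 4; the answer values are invented purely for illustration.

```python
# Hedged sketch of the embodiment score aggregation (Equations 3 and 4).
# Answers follow the 7-point Likert scale described above, from -3 to +3;
# the values below are hypothetical.
answers = {f"Q{i}": v for i, v in enumerate(
    [2, 3, -1, -2, 2, 1, 2, 2, 1, -1, -2, 2, 1, 2], start=1)}

def embodiment_score(q):
    motor_control = ((q["Q1"] + q["Q2"]) - (q["Q3"] + q["Q4"])) / 4          # Eq. 3
    finger_realism = (q["Q5"] + q["Q6"] + q["Q7"]) / 3
    interaction = ((q["Q8"] + q["Q9"]) - (q["Q10"] + q["Q11"])
                   + q["Q12"] + q["Q13"] + q["Q14"]) / 7
    # Eq. 4: interaction realism is weighted twice as much as the other aspects.
    total = (motor_control + finger_realism + interaction * 2) / 4
    return motor_control, finger_realism, interaction, total

print(embodiment_score(answers))
```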
5,795
1903.05238
2963943458
Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environments. The resulting grasps are visually realistic because the hand is automatically fitted to the object shape from a position and orientation determined by the user using the VR handheld controllers (e.g. Oculus Touch motion controllers). Our approach is flexible because it can be adapted to different hand meshes (e.g. human or robotic hands), and it is also easily customizable. Moreover, it enables interaction with different objects regardless of their geometries. In order to validate our proposal, an exhaustive qualitative and quantitative performance analysis has been carried out. On the one hand, the qualitative evaluation was used to assess abstract aspects such as motor control, finger movement realism and interaction realism. On the other hand, for the quantitative evaluation a novel metric has been proposed to visually analyze the performed grasps. The performance analysis results indicate that previous experience with our grasping system is not a prerequisite for an enjoyable, natural and intuitive VR interaction experience.
In order to achieve realistic object interactions, physical simulations on the objects should also be considered @cite_29 @cite_11 @cite_26. Moreover, hand and finger movement trajectories need to be both kinematically and dynamically valid @cite_19. @cite_29 simulate hand interaction, such as two hands grasping each other in a handshake gesture. @cite_26 simulate grasping an object, dropping it on a specific spot on the palm and letting it roll on the hand palm. A limitation of this approach is that information about the object must be known in advance, which prevents interaction with unknown objects. By using an initial grasp pose and a desired object trajectory, the algorithm proposed by Liu @cite_15 can generate physically-based hand manipulation poses, varying the contact points with the object, the grasping forces and also the joint configurations. This approach works well for complex manipulations such as twist-opening a bottle. Ye and Liu @cite_19 reconstruct realistic hand motion and grasping by generating feasible contact point trajectories. The selection of valid motions is defined as a randomized depth-first tree traversal, where nodes are recursively expanded if they are kinematically and dynamically feasible. Otherwise, backtracking is performed in order to explore other possibilities.
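As a purely schematic illustration (not the cited authors' implementation) of the randomized depth-first traversal with feasibility checks and backtracking described above, consider the following toy search, where the feasibility test stands in for the kinematic and dynamic validity checks and all names and values are hypothetical.

```python
# Toy randomized depth-first search with backtracking: children are expanded in
# a random order and only if they pass a feasibility test, standing in for the
# kinematic/dynamic validity checks described above. Purely schematic.
import random

def randomized_dfs(state, expand, feasible, is_goal, rng, depth=0, max_depth=10):
    """Return a feasible path to a goal state, or None after backtracking."""
    if is_goal(state):
        return [state]
    if depth >= max_depth:
        return None
    children = expand(state)
    rng.shuffle(children)                      # randomized branch ordering
    for child in children:
        if not feasible(child):
            continue                           # prune infeasible branches
        path = randomized_dfs(child, expand, feasible, is_goal, rng,
                              depth + 1, max_depth)
        if path is not None:
            return [state] + path              # feasible path found
    return None                                # backtrack and try another branch

# Hypothetical example: states are integers, the goal is to reach 7,
# and "feasibility" excludes multiples of 5.
rng = random.Random(0)
path = randomized_dfs(0, lambda s: [s + 1, s + 2, s + 3],
                      lambda s: s % 5 != 0, lambda s: s == 7, rng)
print(path)
```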
{ "abstract": [ "", "Animated human characters in everyday scenarios must interact with the environment using their hands. Captured human motion can provide a database of realistic examples. However, examples involving contact are difficult to edit and retarget; realism can suffer when a grasp does not appear secure or when an apparent impact does not disturb the hand or the object. Physically based simulations can preserve plausibility through simulating interaction forces. However, such physical models must be driven by a controller, and creating effective controllers for new motion tasks remains a challenge. In this paper, we present a controller for physically based grasping that draws from motion capture data. Our controller explicitly includes passive and active components to uphold compliant yet controllable motion, and it adds compensation for movement of the arm and for gravity to make the behavior of passive and active components less dependent on the dynamics of arm motion. Given a set of motion capture grasp examples, our system solves for all but a small set of parameters for this controller automatically. We demonstrate results for tasks including grasping and two-hand interaction and show that a controller derived from a single motion capture example can be used to form grasps of different object geometries.", "Capturing human activities that involve both gross full-body motion and detailed hand manipulation of objects is challenging for standard motion capture systems. We introduce a new method for creating natural scenes with such human activities. The input to our method includes motions of the full-body and the objects acquired simultaneously by a standard motion capture system. Our method then automatically synthesizes detailed and physically plausible hand manipulation that can seamlessly integrate with the input motions. Instead of producing one \"optimal\" solution, our method presents a set of motions that exploit a wide variety of manipulation strategies. We propose a randomized sampling algorithm to search for as many as possible visually diverse solutions within the computational time budget. Our results highlight complex strategies human hands employ effortlessly and unconsciously, such as static, sliding, rolling, as well as finger gaits with discrete relocation of contact points.", "This paper introduces an optimization-based approach to synthesizing hand manipulations from a starting grasping pose. We describe an automatic method that takes as input an initial grasping pose and partial object trajectory, and produces as output physically plausible hand animation that effects the desired manipulation. In response to different dynamic situations during manipulation, our algorithm can generate a range of possible hand manipulations including changes in joint configurations, changes in contact points, and changes in the grasping force. Formulating hand manipulation as an optimization problem is key to our algorithm's ability to generate a large repertoire of hand motions from limited user input. We introduce an objective function that accentuates the detailed hand motion and contacts adjustment. Furthermore, we describe an optimization method that solves for hand motion and contacts efficiently while taking into account long-term planning of contact forces. 
Our algorithm does not require any tuning of parameters, nor does it require any prescribed hand motion sequences.", "Modifying motion capture to satisfy the constraints of new animation is difficult when contact is involved, and a critical problem for animation of hands. The compliance with which a character makes contact also reveals important aspects of the movement's purpose. We present a new technique called interaction capture, for capturing these contact phenomena. We capture contact forces at the same time as motion, at a high rate, and use both to estimate a nominal reference trajectory and joint compliance. Unlike traditional methods, our method estimates joint compliance without the need for motorized perturbation devices. New interactions can then be synthesized by physically based simulation. We describe a novel position-based linear complementarity problem formulation that includes friction, breaking contact, and the compliant coupling between contacts at different fingers. The technique is validated using data from previous work and our own perturbation-based estimates." ], "cite_N": [ "@cite_26", "@cite_29", "@cite_19", "@cite_15", "@cite_11" ], "mid": [ "2019165997", "2122115534", "2028098496", "2157139838", "2139760425" ] }
A Visually Plausible Grasping System for Object Manipulation and Interaction in Virtual Reality Environments
WITH the advent of affordable VR headsets such as the Oculus VR/Go and HTC Vive, many works and projects are using virtual environments for different purposes. Most VR applications are related to the entertainment industry (i.e. games and 3D cinema) or architectural visualization, where virtual scene realism is a cornerstone. Currently existing VR systems are limited by their resolution, field of view, frame rate, and interaction, among other technical specifications. In order to enhance the user's VR experience, developers also focus on implementing rich interactions with the virtual environment, allowing the user to explore, interact with, and manipulate scene objects as in the real world. Interaction is a crucial feature for training/simulation applications (e.g. flight, driving and medical simulators) and also for teleoperation (e.g. robotics), where the user's ability to interact with and explore the simulated environment is paramount for achieving an immersive experience. For this purpose, most VR devices come with a pair of handheld controllers which are fully tracked in 3D space and specifically designed for interaction. One of the most basic interaction tasks is object grasping and manipulation. In order to achieve an enjoyable experience in VR, a realistic, flexible and real-time grasping system is needed. However, grasp synthesis in manipulation tasks is not straightforward because of the unlimited number of different hand configurations, the variety of object types and geometries, and also the selection of the most suitable grasp for every different object in terms of realism, kinematics and physics. Currently existing real-time approaches in VR are purely animation-driven, relying completely on the realism of the animations. Moreover, these approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. For every different object type and geometry, predefined animations are needed. This fact hinders the user experience, limiting its interaction capabilities. For complete immersion, the user should be able to interact with and manipulate different virtual objects as in the real world. In this paper, we propose a real-time grasping system for object interaction in virtual reality environments. We aim to achieve natural and visually plausible interactions in photorealistic environments rendered by Unreal Engine. Taking advantage of headset tracking and motion controllers, a human operator can be embodied in such environments as a virtual human or robot agent to freely navigate and interact with objects. Our grasping system is able to deal with different object geometries, without the need for a predefined grasp animation for each. With our approach, fingers are automatically fitted to the object's shape and geometry. We constrain the motion of the hand's finger phalanges by checking in real time for collisions with the object geometry. Our grasping system was analyzed both qualitatively and quantitatively. On one side, for the qualitative analysis, the grasping system was implemented in a photorealistic environment where the user is free to interact with real-world objects extracted from the YCB dataset [1] (see Figure 1). The qualitative evaluation is based on a questionnaire that addresses the user's interaction experience in terms of realism during object manipulation and interaction, system flexibility and usability, and general VR experience.
On the other side, a quantitative analysis of the grasping system was carried out, contrasting the elapsed time a user needs to grasp an object with grasp quality based on a novel error metric which quantifies the overlap between the hand's fingers and the grasped object. From the quantitative evaluation, we obtain individual errors for the last two phalanges of each finger, the time the user needed to grasp the object, and also the contact points. This information, alongside other data provided by UnrealROX [2] such as depth maps, instance segmentations, normal maps, 3D bounding boxes and 6D object poses (see Figure 8), enables different robotic applications as described in Section 6. In summary, we make the following three contributions: • We propose a real-time, realistic-looking and flexible grasping system for natural interaction with arbitrarily shaped objects in virtual reality environments; • We propose a novel metric and procedure to analyze visual grasp quality in VR interactions by quantifying hand-object overlap; • We provide the contact points extracted during the interaction in both local and global system coordinates. The rest of the paper is structured as follows. First of all, Section 2 analyzes the latest works related to object interaction and manipulation in virtual environments. The core of this work is contained in Section 3, where our approach is described in detail. Then, the performance analysis, with the qualitative and our novel quantitative evaluations, is discussed in Section 4. Analysis results are reported in Section 5. Then, several applications are discussed in Section 6. After that, limitations of our approach are covered in Section 7 alongside several directions for future work. Finally, some conclusions are drawn in Section 8. Data-driven approaches Data-driven grasping approaches have existed for a long time [3]. These methods are based on large databases of predefined hand poses selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped), which provide the ability to discriminate between different grasp types. From this database, grasp poses are selected according to the given object shape and geometry [6] [7]. Li et al. [6] construct a database with different hand poses and also object shapes and sizes. Despite having a good database, the process of hand pose selection is not straightforward, since there can be multiple equally valid possibilities for the same gesture. To address this problem, Li et al. [6] proposed a shape-matching algorithm which returns multiple potential grasp poses. The selection process is also constrained by the hand's high number of degrees of freedom (DOFs). In order to deal with dimensionality and redundancy, many researchers have used techniques such as principal component analysis (PCA) [8] [9]. For the same purpose, Jorg et al. [10] studied the correlations between hand DOFs, aiming to simplify hand models by reducing the number of DOFs. The results suggest simplifying hand models by reducing the DOFs from 50 to 15 for both hands in conjunction, without losing relevant features. Hybrid data-driven approaches In order to achieve realistic object interactions, physical simulation of the objects should also be considered [11] [12] [13]. Moreover, hand and finger movement trajectories need to be both kinematically and dynamically valid [14]. Pollard et al. [11] simulate hand interaction, such as two hands grasping each other in a handshake gesture. Bai et al.
[13] simulate grasping an object, dropping it on a specific spot on the palm, and letting it roll on the palm. A limitation of this approach is that information about the object must be known in advance, which prevents the robot from interacting with unknown objects. Using an initial grasp pose and a desired object trajectory, the algorithm proposed by Liu [15] can generate physically based hand manipulation poses, varying the contact points with the object, the grasping forces and also the joint configurations. This approach works well for complex manipulations such as twist-opening a bottle. Ye and Liu [14] reconstruct realistic hand motion and grasping by generating feasible contact point trajectories. Selection of valid motions is defined as a randomized depth-first tree traversal, where nodes are recursively expanded if they are kinematically and dynamically feasible. Otherwise, backtracking is performed in order to explore other possibilities. Virtual reality approaches This section is limited to virtual reality interaction using VR motion controllers, leaving aside glove-based and bare-hand approaches. Implementation of the aforementioned techniques in virtual reality environments is a difficult task because optimizations are needed to keep processes running in real time. Most currently existing approaches for flexible and realistic grasping are not suitable for real-time interaction. VR developers aim to create fast solutions with realistic and natural interactions. Recent approaches are directly related to the entertainment industry, i.e. video games. An excellent example is Lone Echo, a narrative adventure game which consists of manipulating tools and objects to solve puzzles. Hand animations are mostly procedurally generated, enabling grasping of complex geometries regardless of their grasp angle. This approach [16] is based on a graph traversal heuristic which searches for intersections between the hand's fingers and the object's surface mesh triangles. An A* heuristic finds the intersection nearest to the palm and also avoids invalid intersections. After calculating the angles needed to make contact with each intersection point, the highest angle is selected and the fingers are rotated accordingly. Most solutions implemented in VR are animation-based [17] [18] [19]. These approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. Movements are predefined for concrete object geometries, hindering the user's interaction capabilities in the virtual environment. In [17], a distance grab selection technique is implemented to enhance user comfort when interacting in small play areas, while sitting, or when grabbing objects on the floor. The grasping system is based on three trigger volumes attached to each hand: two small cylinders for short-range grasps, and a cone for long-range grabbing. Based on this approach, we use trigger volumes attached to the finger phalanges to control their movement and detect object collisions more precisely. In this way we achieve a more flexible and visually plausible grasping system, enhancing immersion and realism during interactions. GRASPING SYSTEM With the latest advances in rendering techniques, the visualization of virtual reality (VR) environments is increasingly photorealistic. Besides graphics, which are the cornerstone of most VR solutions, interaction is also an essential part of enhancing the user experience and immersion.
VR scene content is portrayed in a physically tangible way, inviting users to explore the environment and to interact with or manipulate the represented objects as in the real world. VR devices aim to provide very congruent means of primary interaction, namely a pair of handheld devices with very accurate 6D one-to-one tracking. The main purpose is to create rich interactions producing memorable and satisfying VR experiences. Most of the currently available VR solutions and games lack robust and natural object manipulation and interaction capabilities. This is because bringing natural and intuitive interactions to VR is not straightforward, which makes VR development challenging at this stage. Interactions need to run in real time while maintaining a high and stable frame rate, directly mapping user movement to VR input in order to avoid VR sickness (visual and vestibular mismatch). Maintaining the desired 90 frames per second (FPS) in a photorealistic scene alongside complex interactions is not straightforward. This indicates the need for a flexible grasping system designed to naturally and intuitively manipulate unknown objects of different geometries in real time. Overview Our grasping approach was designed for real-time interaction and manipulation in virtual reality environments by providing a simple, modular, flexible, robust, and visually realistic grasping system. Its main features are described as follows: • Simple and modular: it can be easily integrated with other hand configurations. Its design is modular and adaptable to different hand skeletons and models. • Flexible: most of the currently available VR grasp solutions are purely animation-driven, thus limited to known geometries and unable to deal with previously unseen objects. In contrast, our grasping system is flexible, as it allows interaction with unknown objects. In this way, the user can freely decide which object to interact with, without any restrictions. • Robust: unknown objects can have different geometries. However, our approach is able to adapt the virtual hand to objects regardless of their shape. • Visually realistic: the grasping action is fully controlled by the user, taking advantage of their previous experience and knowledge in grasping common everyday objects such as cans, cereal boxes, fruits, tools, etc. This makes the resulting grasp visually realistic and natural, just as a human would perform it in real life. The combination of the features described above makes VR interaction a pleasant user experience, where object manipulation is smooth and intuitive. Our grasping system works by detecting collisions with objects through the use of trigger actors placed experimentally on the finger phalanges. A trigger actor is a component from Unreal Engine 4 used for casting an event in response to an interaction, e.g. a collision with another object. These components can have different shapes, such as capsule, box, sphere, etc. In Figure 2, capsule triggers are represented in green and the palm sphere trigger in red. We experimentally placed two capsule triggers on the last two phalanges of each finger. We noticed that this configuration is the most effective in detecting object collisions. Notice that collision detection is performed every frame, so for heavy configurations with many triggers, performance would be harmed. Components Our grasping system is composed of the components represented in Figure 3. These components are defined as follows: • Object selection: selects the nearest object to the hand palm.
The detection area is determined by the sphere trigger (see Figure 2). The sphere trigger returns the world location of all the overlapped actors. As a result, the nearest actor can be determined by computing the distance from each overlapped actor to the center of the sphere trigger. The smallest distance determines the nearest object, whose reference is saved for the other components. • Interaction manager: manages the capsule triggers which are attached to the finger phalanges as represented in Figure 2. If a capsule trigger reports an overlap event, the movement of its corresponding phalanx is blocked until the hand is reopened or the overlap with the manipulated object ends. The phalanx state (blocked or in movement) is used as input to the grasping logic component. A phalanx is blocked if there is an overlap of its corresponding capsule trigger with the manipulated object. • Finger movement: this component determines the movement of the fingers during the hand closing and opening animations. It ensures a smooth animation, avoiding unexpected and unrealistic behavior in finger movement caused by a performance drop or other interaction issues. Basically, it monitors each variation in the rotation value of the phalanx. If an unexpected (i.e. large) variation is detected between frames, the missing intermediate values are interpolated so as to keep the finger movement smooth. • Grasping logic: this component manages when to grab or release an object. This decision is made based on the currently blocked phalanges determined by the interaction manager component. The object is grasped or released based on the following function: $f(x) = \text{true}$ if $(th_{ph} \lor palm) \land (in_{ph} \lor mi_{ph})$, and $f(x) = \text{false}$ otherwise (1), where $x = (th_{ph}, in_{ph}, mi_{ph}, palm)$ is defined by $th_{ph} = thumb_{mid} \lor thumb_{dist}$, $in_{ph} = index_{mid} \lor index_{dist}$, and $mi_{ph} = middle_{mid} \lor middle_{dist}$ (2). Equation 1 determines when an object is grasped or released based on the inputs defined in Equation 2, where $th_{ph}$, $in_{ph}$, and $mi_{ph}$ refer to the thumb, index and middle phalanges respectively. Following human hand morphology, the mid and dist subscripts refer to the middle and distal phalanx (e.g. $thumb_{dist}$ references the distal phalanx of the thumb and, at the implementation level, is a boolean value). Implementation details The grasping system was originally implemented in Unreal Engine 4 (UE4); however, it can easily be implemented in other engines such as Unity, which would also provide the necessary components for replicating the system (e.g. overlapping triggers). The implementation consists of UE4 blueprints and is structured into the components depicted in Figure 3 and described in the previous section. The implementation is available on GitHub (https://github.com/3dperceptionlab/unrealgrasp).
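As an illustration of how the object-selection and grasping-logic components fit together (Equations 1 and 2), here is a minimal Python sketch; the function and field names are illustrative assumptions, since the actual implementation consists of UE4 blueprints rather than code.

```python
import math
from dataclasses import dataclass

def select_nearest_object(sphere_center, overlapped_actors):
    """Object selection: pick the overlapped actor whose world location is
    closest to the centre of the palm sphere trigger."""
    return min(overlapped_actors,
               key=lambda actor: math.dist(sphere_center, actor["location"]))

@dataclass
class PhalanxState:
    """Overlap flags reported by the capsule triggers (True = phalanx blocked)."""
    thumb_mid: bool = False
    thumb_dist: bool = False
    index_mid: bool = False
    index_dist: bool = False
    middle_mid: bool = False
    middle_dist: bool = False
    palm: bool = False  # palm sphere trigger

def should_grasp(s: PhalanxState) -> bool:
    """Equations 1-2: grasp while (thumb or palm) and (index or middle finger)
    overlap the manipulated object; release otherwise."""
    th_ph = s.thumb_mid or s.thumb_dist
    in_ph = s.index_mid or s.index_dist
    mi_ph = s.middle_mid or s.middle_dist
    return (th_ph or s.palm) and (in_ph or mi_ph)

# Thumb distal and index middle phalanges overlap the object -> grasp.
print(should_grasp(PhalanxState(thumb_dist=True, index_mid=True)))  # True
# Only the palm touches the object -> no grasp (no opposing finger contact).
print(should_grasp(PhalanxState(palm=True)))  # False
```

Note how Equation 1 encodes an opposition requirement: the thumb (or palm) must touch the object together with the index or middle finger before a grab is triggered.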
PERFORMANCE ANALYSIS In order to validate our proposal, a complete performance analysis has been carried out. This analysis ranges from a qualitative evaluation, which is prevalent in the assessment of VR systems, to a novel quantitative evaluation. The evaluation methods are briefly described as follows: • Qualitative evaluation: based on the user experience of interacting with real objects from the YCB dataset in a photorealistic indoor scenario. Its purpose is to assess interaction realism, immersion, hand movement naturalness and other qualitative aspects described in Table 1 in Subsection 4.1, which addresses the qualitative evaluation in detail. • Quantitative evaluation: based on grasping quality in terms of realism (i.e. how visually plausible it is). We consider a grasp visually plausible when the hand palm or fingers are level with the object surface, as in a real-life grasp. However, when dealing with complex meshes, collision detection precision can be significantly affected. In this case, fingers could penetrate the object surface, or remain above its surface when a collision is detected earlier than expected. This would result in an unnatural and unrealistic grasp. To visually quantify grasping quality, we propose a novel error metric based on computing the distance from each capsule trigger to the nearest contact point on the object surface. The quantitative evaluation and the proposed error metric are addressed in detail in Subsection 4.2. Qualitative evaluation Most VR experiments include qualitative and quantitative studies to measure their realism and immersion. Arguably, questionnaires are the default method to qualitatively assess any experience, and the vast majority of works include them in one way or another [20] [21] [22]. However, one of the main problems with them is the absence of a standardized set of questions for different experiences that allows for fair and easy comparisons. The different nature of VR systems and experiences makes it challenging to find a set of evaluation questions that fits them all. Following the efforts of [23] towards a standardized embodiment questionnaire, we analyzed several works in the literature [24] [25] that included questionnaires to assess VR experiences in order to devise a standard one for virtual grasping systems. Inspired by such works, we have identified three main types of questions or aspects: • Motor Control: this aspect considers the movement of the virtual hands as a whole and its responsiveness to the virtual reality controllers. Hands should move naturally and their movements must be caused exactly by the controllers, without unwanted movements and without limiting or restricting real movements to adapt to the virtual ones. • Finger Movement: this aspect takes the specific finger movements into account. Such movements must be natural and plausible. Moreover, they must react properly to the user's intent. • Interaction Realism: this aspect is related to the interaction of the hand and fingers with objects. The questionnaire, shown in Table 1, is composed of fourteen questions related to the previously described aspects (Table 1 excerpt: "It seemed as if the virtual fingers were mine when grabbing an object"; Q10: "I felt that grabbing objects was clumsy and hard to achieve"; Q11: "It seemed as if finger movement were guided and unnatural"; Q12: "I felt that grasps were visually correct and natural"; Q13: "I felt that grasps were physically correct and natural"; Q14: "It seemed that fingers were adapting properly to the different geometries"). Following [23], the users of the study will be presented with these questions right after the end of the experience, in a randomized order to limit context effects. In addition, questions must be answered on a 7-point Likert scale: (+3) strongly agree, (+2) agree, (+1) somewhat agree, (0) neutral, (-1) somewhat disagree, (-2) disagree, and (-3) strongly disagree.
Results will be presented as a single embodiment score using the following equations: $\text{Motor Control} = ((Q1 + Q2) - (Q3 + Q4))/4$, $\text{Finger Movement Realism} = (Q5 + Q6 + Q7)/3$, and $\text{Interaction Realism} = ((Q8 + Q9) - (Q10 + Q11) + Q12 + Q13 + Q14)/7$ (3). Using the results of each individual aspect, we obtain the total embodiment score as follows: $\text{Score} = (\text{Motor Control} + \text{Finger Movement Realism} + 2 \cdot \text{Interaction Realism})/4$ (4). Interaction realism is the key aspect of this qualitative evaluation; therefore, in Equation 4 we emphasize this aspect by weighting it higher.
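To make the aggregation in Equations 3 and 4 concrete, here is a minimal sketch; it assumes the fourteen Likert answers are collected in a simple dictionary (an illustrative choice, not part of the original system), and it treats the subtracted items (Q3, Q4, Q10, Q11) as the negatively phrased statements, as Equation 3 and the visible Table 1 entries suggest.

```python
def embodiment_score(q: dict) -> dict:
    """Aggregate the 7-point Likert answers q['Q1'] .. q['Q14'] (each in [-3, +3])
    into the per-aspect scores of Equation 3 and the total score of Equation 4."""
    motor_control = ((q["Q1"] + q["Q2"]) - (q["Q3"] + q["Q4"])) / 4
    finger_movement = (q["Q5"] + q["Q6"] + q["Q7"]) / 3
    interaction = ((q["Q8"] + q["Q9"]) - (q["Q10"] + q["Q11"])
                   + q["Q12"] + q["Q13"] + q["Q14"]) / 7
    # Equation 4 weights interaction realism twice as much as the other two aspects.
    total = (motor_control + finger_movement + 2 * interaction) / 4
    return {"motor_control": motor_control,
            "finger_movement_realism": finger_movement,
            "interaction_realism": interaction,
            "total": total}

# Example: +2 ("agree") on every positively phrased item and -2 ("disagree")
# on the negatively phrased ones yields a total embodiment score of 2.0 (max 3.0).
answers = {f"Q{i}": (-2 if i in (3, 4, 10, 11) else 2) for i in range(1, 15)}
print(embodiment_score(answers)["total"])  # 2.0
```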
Quantitative evaluation With the quantitative evaluation, we aim to evaluate grasping quality in terms of how visually plausible or realistic it is. In other words, our purpose is to visually quantify our grasping performance, analyzing each finger position and how it fits the object mesh. When a collision is detected by a capsule trigger, we proceed with the calculation of the nearest distance between the finger phalanx surface (delimited by the capsule trigger) and the object mesh (see Equation 8). In Figure 4, the red capsules represent 3D sphere-tracing volumes which provide information about the nearest collision from the trace starting point to the first contact point on the object surface that intersects the sphere volume. For each finger phalanx with an attached capsule trigger (represented in green), we cast a sphere trace, obtaining the nearest contact point on the object surface, represented as a lime-colored dot (impact point, Ip). In this representation, the total error for the index finger would be the average of the distances in millimeters between the surface of each phalanx and the nearest contact point on the object surface (see Equation 9). The nearest-distance computation is approximated by an equation that finds the distance between the impact point and the plane that contains the capsule trigger center point and is perpendicular to the longitudinal axis of the red capsule. Capsule trigger centers are located on the surface of the hand mesh, so this computation should approximate the nearest distance to the mesh well enough, without being computationally too demanding. To compute this distance, we define the following vectors from the three input points (the starting point of the red capsule, the impact point and the capsule trigger center point): $\vec{D}_{Ip} = Ip - Sp$ and $\vec{D}_{CTc} = CTc - Sp$ (5), where $\vec{D}_{Ip}$ is the vector from the starting point to the impact point, and $\vec{D}_{CTc}$ represents the direction of the longitudinal axis of the red capsule. They are represented in navy blue and purple respectively in Figure 4. Then, we find the cosine of the angle they form through their dot product: $\vec{D}_{Ip} \cdot \vec{D}_{CTc} = |\vec{D}_{Ip}| \, |\vec{D}_{CTc}| \cos(\beta)$, hence $\cos(\beta) = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{Ip}| \, |\vec{D}_{CTc}|}$ (6). We can now substitute that cosine when computing the projection of $\vec{D}_{Ip}$ onto the longitudinal axis of the red capsule ($\vec{D}_{Pr}$ in Figure 4): $|\vec{D}_{Pr}| = \cos(\beta) \, |\vec{D}_{Ip}| = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{Ip}| \, |\vec{D}_{CTc}|} \, |\vec{D}_{Ip}| = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{CTc}|}$ (7). Having that magnitude, we only have to subtract $|\vec{D}_{CTc}|$ in order to obtain the desired distance: $ND(Ip, Sp, CTc) = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{CTc}|} - |\vec{D}_{CTc}| = \frac{(Ip - Sp) \cdot (CTc - Sp)}{|CTc - Sp|} - |CTc - Sp|$ (8). Computing the distance like this, with this final subtraction, allows us to obtain a positive distance when the impact point is outside the hand mesh, and a negative one if it is inside. We compute the nearest distance for each capsule trigger attached to a finger phalanx. As stated before, a negative distance indicates a finger penetration issue on the object surface, whereas a positive distance means the finger stopped above the object surface. The ideal case is a zero distance, that is, the finger is perfectly situated on the object surface. The total error for the hand is given by the following equation: $\text{HandError} = \sum_{i=1}^{N_{Fingers}} \frac{\sum_{j=1}^{N_{CTF}} |ND(Ip_{ij}, Sp_{ij}, CTc_{ij})|}{N_{CTF}}$ (9), where $N_{CTF}$ is the number of capsule triggers per finger.
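To make Equations 5-9 concrete, here is a minimal sketch of the per-trigger signed distance and the aggregated hand error; the data layout (a mapping from finger name to (Ip, Sp, CTc) point triples) is an illustrative assumption, since in the actual system these points come from UE4 sphere traces, and Equation 9 is read here as the sum over fingers of the per-finger mean absolute distance.

```python
import numpy as np

def nearest_distance(ip, sp, ctc):
    """Equations 5-8: signed distance from the impact point Ip to the plane that
    contains the capsule-trigger centre CTc and is perpendicular to the trace
    axis Sp -> CTc. Negative values indicate that the finger penetrates the
    object surface; positive values mean it stopped above the surface."""
    d_ip = np.asarray(ip, dtype=float) - np.asarray(sp, dtype=float)    # D_Ip  (Eq. 5)
    d_ctc = np.asarray(ctc, dtype=float) - np.asarray(sp, dtype=float)  # D_CTc (Eq. 5)
    axis_len = np.linalg.norm(d_ctc)
    projection = np.dot(d_ip, d_ctc) / axis_len                         # |D_Pr| (Eq. 7)
    return projection - axis_len                                        # ND (Eq. 8)

def hand_error(contacts) -> float:
    """Equation 9: sum over fingers of the per-finger mean of |ND|, in the same
    units as the input coordinates (millimetres in the paper)."""
    total = 0.0
    for triples in contacts.values():
        distances = [abs(nearest_distance(ip, sp, ctc)) for ip, sp, ctc in triples]
        total += sum(distances) / len(distances)
    return total

# Two capsule triggers on the index finger: one 1 mm inside the object surface
# and one 2 mm above it along the trace axis -> per-finger mean error of 1.5 mm.
contacts = {"index": [((0, 0, 9), (0, 0, 0), (0, 0, 10)),
                      ((0, 0, 12), (0, 0, 0), (0, 0, 10))]}
print(hand_error(contacts))  # 1.5
```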
Dataset To benchmark our grasping system we used a set of objects that are frequently used in daily life, such as food items (e.g. cracker box, cans, box of sugar, fruits, etc.), tool items (e.g. power drill, hammer, screwdrivers, etc.), kitchen items (e.g. eating utensils) and also spherically shaped objects (e.g. tennis ball, racquetball, golf ball, etc.). The Yale-CMU-Berkeley (YCB) Object and Model set [1] provides these real-life 3D textured models, scanned with outstanding accuracy and detail. The available objects have a wide variety of shapes, textures and sizes, as can be seen in Figure 5. The advantage of using real-life objects is that users already have previous experience manipulating similar objects, so they will try to grab and interact with the objects in the same way. Participants For the performance analysis, we recruited ten participants (8M/2F) from the local campus. Four of them have experience with VR applications. The rest are inexperienced virtual reality users. Participants will take part in both the qualitative and quantitative evaluations. The performance analysis procedure is described in the following subsection, indicating the concrete tasks to be performed by each participant. Procedure The system performance analysis begins with the quantitative evaluation. In this first phase, the user will be embodied in a controlled scenario (https://youtu.be/4sPhLbHpywM) where 30 different objects will be spawned in a delimited area, with random orientation, and in the same order as represented in Figure 5. The user will try to grasp each object as they would in real life and as quickly as possible. For each grasp, the system will compute the error metric and will also store the time spent by the user in grasping the object. The purpose of this first phase is to visually analyze grasping quality, which is directly related to user expertise in VR environments and, concretely, with our grasping system. An experienced user would know the system's limits, both when interacting with complex geometries and with large objects that make it difficult to perform the grasp action quickly and naturally. For the qualitative evaluation, the same user will be embodied in a photorealistic scenario, replacing the mannequin hands with a human hand model with realistic textures. After interacting freely in the photorealistic virtual environment (https://youtu.be/65gdFdwsTVg), the user will have to answer the evaluation questionnaire defined in Table 1. The main purpose is the evaluation of interaction realism, finger and hand movement naturalness and motor control, among other qualitative aspects regarding the user experience in VR environments. RESULTS AND DISCUSSION In this section we will discuss and analyze the results obtained from the performance analysis process. On the one hand, we will draw conclusions from the average error obtained in grasping each object by each participant group, and also from the overall error per object taking into account all the participants (see Figure 7). On the other hand, we obtained the average elapsed time needed to grasp each object for each participant group, and also the average elapsed time needed for each object taking into account all the participants (see Figure 6). This will allow us to draw conclusions about the most difficult objects to manipulate in terms of accuracy and elapsed grasping time. Moreover, we can compare the system performance achieved by inexperienced users with that of experienced ones. Qualitative evaluation The qualitative evaluation for each participant was calculated using Equation 3, obtaining a score for each qualitative aspect. In Table 2 we report, for each group of participants, the average score for each evaluation aspect and the total embodiment score computed using Equation 4. Regarding the results represented in Table 2, the assessment by experienced users has been less favorable, as they have more elaborate criteria given their previous experience with virtual reality applications. Finger movement realism (aspect 2) was evaluated similarly by both groups. This is because the hand closing and opening gestures are guided by the same animation in both cases. Finally, the reported results referring to interaction realism are the lowest in both cases. This is mostly because users cannot control their individual finger movements, since the overall hand gesture is controlled by a single trigger button on the controller. However, the overall embodiment score obtained is 2.08 out of 3.0. Quantitative evaluation As expected, inexperienced users took longer to grasp almost all of the object set due to their lack of practice and expertise with the system. This is clearly represented in Figure 6, where experienced users only took longer in grasping some tools such as the flat screwdriver (Figure 5z) and the hammer (Figure 5aa). Inexperienced users take an average of 0.36 seconds longer to grab the objects. In practice, and regarding interaction, this is not a factor that makes a crucial difference. Analyzing Figure 6, the tuna fish can (Figure 5f), potted meat can (Figure 5h), spatula (Figure 5u), toy airplane (Figure 5ad) and bleach cleaner (Figure 5q) are the most time-consuming for users to grasp.
This is mainly because of their sizes and complex geometries. Since objects are spawned with a random orientation, this fact can affect grasping times. Even so, we can conclude that the largest objects are those that users take the longest to grasp. Regarding Figure 7, we can observe that the errors obtained by both groups of participants are quite similar. The most significant differences were observed in the case of the power drill (Figure 5v) and the spatula. The power drill has a complex geometry and its size also hinders its grasp, as is the case for the spatula and the toy airplane. Analyzing the overall error in Figure 7, we conclude that the largest objects, such as the toy airplane, power drill, and bleach cleaner, are those with the highest error. In addition, we observe how the overall error decreases from the first objects to the last ones. This is mainly because user skill and expertise with the grasping system improve progressively. Moreover, the results indicate a steep learning curve. APPLICATIONS Our grasping system can be applied to several existing problems in different areas of interest, such as robotics [26], rehabilitation [27] and interaction using augmented reality [28]. In robotics, different works have explored robust grasp approaches that allow robots to interact with the environment. These contributions are organized into mainly four different blocks [29]: methods that rely on known objects and previously estimated grasp points [30], grasping methods for familiar objects [31], methods for unknown objects based on the analysis of object geometry [32], and automatic learning approaches [33]. Our approach is most closely related to this last block, where its use would potentially be a relevant contribution. As a direct application, our system enables human-robot knowledge transfer, where robots try to imitate human behaviour when performing grasps. Our grasping system is also useful for the rehabilitation of patients with hand motor difficulties, which could even be done remotely, assisted by an expert [34], or through an automatic system [35]. Several works have demonstrated the viability of patient rehabilitation in virtual environments [27], helping patients to improve the mobility of their hands in daily tasks [36]. Our novel error metric, in combination with other automatic learning methods, can be used to guide patients during rehabilitation with feedback information and instructions. This will make rehabilitation a more attractive process, by quantifying the patient's progress and visualizing their improvements over the duration of rehabilitation. Finally, our grasping system integrated in UnrealROX [2] enables many other computer vision and artificial intelligence applications by providing synthetic ground truth data, such as depth and normal maps, object masks, trajectories, stereo pairs, etc., of the virtual human hands interacting with real objects from the YCB dataset (Figure 8). LIMITATIONS AND FUTURE WORKS • Hand movement is based on a single animation regardless of object geometry. Depending on the object shape, we could vary the grasping gesture: spherical grasp, cylindrical grasp, finger pinch, key pinch, etc. However, our grasping gesture was experimentally the best when dealing with differently shaped objects. • The object can be grasped with only one hand. The user can interact with different objects using both hands at the same time, but not with the same object using both hands.
• Sometimes it is difficult to deal with large objects, due to the initial hand posture or because objects slide out of the hand palm due to physical collisions. Experienced users can deal with this problem better. As future work, and in order to improve our grasping system, we could vary the hand grip gesture according to the geometry of the object being manipulated. This means finding a correspondence between the object geometry and a simple shape, e.g. a tennis ball is similar to a sphere, thus proceeding with a spherical grasp movement. At the application level, there are several possibilities, as discussed in the previous section. However, we would like to emphasize the use of the contact points obtained when grasping an object in virtual reality to transfer that knowledge and human behavior to real robots. CONCLUSION This work proposes a flexible and realistic-looking grasping system which enables smooth, real-time interaction with arbitrarily shaped objects in virtual reality environments. This approach is unconstrained by object geometry, it is fully controlled by the user, and it is modular and easily implemented on different meshes or skeletal configurations. In order to validate our approach, an exhaustive evaluation process was carried out. Our system was evaluated qualitatively and quantitatively by two groups of participants: those with previous experience in virtual reality environments (experienced users) and those without expertise in VR (inexperienced users). For the quantitative evaluation, a new error metric was proposed to evaluate each grasp, quantifying hand-object overlap. From the performance analysis results, we conclude that the overall user experience was satisfactory and positive. Analyzing the quantitative evaluation, the error difference between experienced and inexperienced users is subtle. Moreover, average errors become progressively smaller as more objects are grasped. This clearly indicates a steep learning curve. In addition, the qualitative analysis points to a natural and realistic interaction. Users can freely manipulate previously defined dynamic objects in the photorealistic environment. Moreover, grasping contact points can be easily extracted, thus enabling numerous applications, especially in the field of robotics. The Unreal Engine 4 project source code is available on GitHub alongside several video demonstrations. This approach can easily be implemented in different game engines.
5,795
cmp-lg9804001
1742257591
Graph Interpolation Grammars are a declarative formalism with an operational semantics. Their goal is to emulate salient features of the human parser, and notably incrementality. The parsing process defined by GIGs incrementally builds a syntactic representation of a sentence as each successive lexeme is read. A GIG rule specifies a set of parse configurations that trigger its application and an operation to perform on a matching configuration. Rules are partly context-sensitive; furthermore, they are reversible, meaning that their operations can be undone, which allows the parsing process to be nondeterministic. These two factors confer enough expressive power to the formalism for parsing natural languages.
Graph interpolation can be viewed as an extension of tree adjunction to parse graphs. And, indeed, TAGs @cite_2 , by introducing a 2-dimensional formalism into computational linguistics, have made a decisive step towards designing a syntactic theory that is both computationally tractable and linguistically realistic. In this respect, it is an obligatory reference for any syntactic theory intent on satisfying these criteria.
{ "abstract": [ "In this paper, a tree generating system called a tree adjunct grammar is described and its formal properties are studied relating them to the tree generating systems of Brainerd (Information and Control14 (1969), 217-231) and Rounds (Mathematical Systems Theory 4 (1970), 257-287) and to the recognizable sets and local sets discussed by Thatcher (Journal of Computer and System Sciences1 (1967), 317-322; 4 (1970), 339-367) and Rounds. Linguistic relevance of these systems has been briefly discussed also." ], "cite_N": [ "@cite_2" ], "mid": [ "2130630493" ] }
0
cmp-lg9804001
1742257591
Graph Interpolation Grammars are a declarative formalism with an operational semantics. Their goal is to emulate salient features of the human parser, and notably incrementality. The parsing process defined by GIGs incrementally builds a syntactic representation of a sentence as each successive lexeme is read. A GIG rule specifies a set of parse configurations that trigger its application and an operation to perform on a matching configuration. Rules are partly context-sensitive; furthermore, they are reversible, meaning that their operations can be undone, which allows the parsing process to be nondeterministic. These two factors confer enough expressive power to the formalism for parsing natural languages.
In Lexical Functional Grammars @cite_4 , grammatical functions are loosely coupled with phrase structure, which seems to be just the opposite of what is done in a GIG, in which functional edges are part of the phrase structure. Nonetheless, these two approaches share the concern of bringing out a functional structure, even if much of what enters into an f-structure (i.e. a functional structure) in LFG is to be addressed by the semantic component ---a topic for further research--- in GIG.
{ "abstract": [ "The editor of this volume, who is also author or coauthor of five of the contributions, has provided an introduction that not only affords an overview of the separate articles but also interrelates the basic issues in linguistics, psycholinguistics and cognitive studies that are addressed in this volume. The twelve articles are grouped into three sections, as follows: \"I. Lexical Representation: \" The Passive in Lexical Theory (J. Bresnan); On the Lexical Representation of Romance Reflexive Clitics (J. Grimshaw); and Polyadicity (J. Bresnan).\"II. Syntactic Representation: \" Lexical-Functional Grammar: A Formal Theory for Grammatical Representation (R. Kaplan and J. Bresnan); Control and Complementation (J. Bresnan); Case Agreement in Russian (C. Neidle); The Representation of Case in Icelandic (A. Andrews); Grammatical Relations and Clause Structure in Malayalam (K. P. Monahan); and Sluicing: A Lexical Interpretation Procedure (L. Levin).\"III. Cognitive Processing of Grammatical Representations: \" A Theory of the Acquisition of Lexical Interpretive Grammars (S. Pinker); Toward a Theory of Lexico-Syntactic Interactions in Sentence Perception (M. Ford, J. Bresnan, and R. Kaplan); and Sentence Planning Units: Implications for the Speaker's Representation of Meaningful Relations Underlying Sentences (M. Ford)." ], "cite_N": [ "@cite_4" ], "mid": [ "2032527312" ] }
0
cmp-lg9709004
1575569168
Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on the resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources such as lexical databases and training collections to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for badly trained categories. When testing a direct categorization, a WordNet-based one, a training algorithm, and our integrated approach, the latter exhibits better performance than any of the others. Incidentally, the performance of the WordNet-based approach is comparable with that of the training approach.
To our knowledge, lexical databases have been used only once in TC. Hearst @cite_10 adapted a disambiguation algorithm by Yarowsky using WordNet to recognize category occurrences. Categories are made of WordNet terms, which is not the general case for standard or user-defined categories. It is a hard task to adapt WordNet subsets to pre-existing categories, especially when they are domain dependent. Hearst's approach shows promising results, confirmed by the fact that our WordNet-based approach performs at least as well as a simple training approach.
{ "abstract": [ "This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling , recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from pre-defined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used. For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar , that displays retrieved documents in terms of interactions among their automatically-assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection." ], "cite_N": [ "@cite_10" ], "mid": [ "1493108551" ] }
0
cmp-lg9709004
1575569168
Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on the resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources such as lexical databases and training collections to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for badly trained categories. When testing a direct categorization, a WordNet-based one, a training algorithm, and our integrated approach, the latter exhibits better performance than any of the others. Incidentally, the performance of the WordNet-based approach is comparable with that of the training approach.
Lexical databases have been employed recently in word sense disambiguation. For example, Agirre and Rigau @cite_3 make use of a semantic distance that takes into account structural factors in WordNet to achieve good results for this task. Additionally, Resnik @cite_2 combines the use of WordNet and a text collection to define a distance for disambiguating noun groupings. Although the text collection is not a training collection (in the sense of a collection of manually labelled texts for a pre-defined text processing task), his approach can be regarded as the most similar to ours in the disambiguation task. Finally, Ng and Lee @cite_11 make use of several sources of information inside a training collection (neighborhood, part of speech, morphological form, etc.) to get good results in disambiguating unrestricted text.
{ "abstract": [ "In this paper, we present a new approach for word sense disambiguation (WSD) using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including part of speech of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relation. We tested our WSD program, named LEXAS, on both a common data set used in previous work, as well as on a large sense-tagged corpus that we separately constructed. LEXAS achieves a higher accuracy on the common data set, and performs better than the most frequent heuristic on the highly ambiguous words in the large corpus tagged with the refined senses of WORDNET.", "This paper presents a method for the resolution of lexical ambiguity of nouns and its automatic evaluation over the Brown Corpus. The method relies on the use of the wide-coverage noun taxonomy of WordNet and the notion of conceptual distance among concepts, captured by a Conceptual Density formula developed for this purpose. This fully automatic method requires no hand coding of lexical entries, hand tagging of text nor any kind of training process. The results of the experiments have been automatically evaluated against SemCor, the sense-tagged version of the Brown Corpus.", "Word groupings useful for language processing tasks are increasingly available, as thesauri appear on-line, and as distributional word clustering techniques improve. However, for many tasks, one is interested in relationships among word senses, not words. This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns — the kind of data one finds in on-line thesauri, or as the output of distributional clustering algorithms. Disambiguation is performed with respect to WordNet senses, which are fairly fine-grained; however, the method also permits the assignment of higher-level WordNet categories rather than sense labels. The method is illustrated primarily by example, though results of a more rigorous evaluation are also presented." ], "cite_N": [ "@cite_11", "@cite_3", "@cite_2" ], "mid": [ "2157025692", "46493886", "1608874027" ] }
0
cmp-lg9706008
2951421399
This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high-dimensional feature set.
Word-sense disambiguation has more commonly been cast as a problem in supervised learning (e.g., @cite_13 , @cite_2 , @cite_24 , @cite_6 , @cite_14 , @cite_5 , @cite_3 , @cite_16 , @cite_37 ). However, all of these methods require that manually sense-tagged text be available to train the algorithm. For most domains such text is not available and is expensive to create. It seems more reasonable to assume that such text will not usually be available and to pursue unsupervised approaches that rely only on the features in a text that can be automatically identified.
{ "abstract": [ "The Naive Mix is a new supervised learning algorithm that is based on a sequential method for selecting probabilistic models. The usual objective of model selection is to find a single model that adequately characterizes the data in a training sample. However, during model selection a sequence of models is generated that consists of the best-fitting model at each level of model complexity. The Naive Mix utilizes this sequence of models to define a probabilistic model which is then used as a probabilistic classifier to perform word-sense disambiguation. The models in this sequence are restricted to the class of decomposable log-linear models. This class of models offers a number of computational advantages. Experiments disambiguating twelve different words show that a Naive Mix formulated with a forward sequential search and Akaike's Information Criteria rivals established supervised learning algorithms such as decision trees (C4.5), rule induction (CN2) and nearest-neighbor classification (PEBLS).", "Most probabilistic classifiers used for word-sense disambiguation have either been based on only one contextual feature or have used a model that is simply assumed to characterize the interdependencies among multiple contextual features. In this paper, a different approach to formulating a probabilistic model is presented along with a case study of the performance of models produced in this manner for the disambiguation of the noun \"interest\". We describe a method for formulating probabilistic models that use multiple contextual features for word-sense disambiguation, without requiring untested assumptions regarding the form of the model. Using this approach, the joint distribution of all variables is described by only the most systematic variable interactions, thereby limiting the number of parameters to be estimated, supporting computational efficiency, and providing an understanding of the data.", "The three corpus-based statistical sense resolution methods studied here attempt to infer the correct sense of a polysemous word by using knowledge about patterns of word cooccurrences. The techniques were based on Bayesian decision theory, neural, networks, and content vectors as used in information retrieval. To understand these methods better, we posed a very specific problem: given a set of contexts, each containing the noun line in a known sense, construct a classifier that selects the correct sense of line for new contexts. To see how the degree of polysemy affects performance, results from three- and six-sense tasks are compared.The results demonstrate that each of the techniques is able to distinguish six senses of line with an accuracy greater than 70 . Furthermore, the response patterns of the classifiers are, for the most part, statistically indistinguishable from one another. Comparison of the two tasks suggests that the degree of difficulty involved in resolving individual senses is a greater performance factor than the degree of polysemy.", "In this paper, we present a new approach for word sense disambiguation (WSD) using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including part of speech of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relation. 
We tested our WSD program, named Lexas , on both a common data set used in previous work, as well as on a large sense-tagged corpus that we separately constructed. Lexas achieves a higher accuracy on the common data set, and performs better than the most frequent heuristic on the highly ambiguous words in the large corpus tagged with the refined senses of WordNet .", "Previous work [Gale, Church and Yarowsky, 1992] showed that with high probability a polysemous word has one sense per discourse. In this paper we show that for certain definitions of collocation, a polysemous word exhibits essentially only one sense per collocation. We test this empirical hypothesis for several definitions of sense and collocation, and discover that it holds with 90--99 accuracy for binary ambiguities. We utilize this property in a disambiguation algorithm that achieves precision of 92 using combined models of very local context.", "", "This paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context. The algorithms tested include statistical, neural-network, decision-tree, rule-based, and case-based classification techniques. The specific problem tested involves disambiguating six senses of the word line'' using the words in the current and proceeding sentence as context. The statistical and neural-network methods perform the best on this particular problem and we discuss a potential reason for this observed difference. We also discuss the role of bias in machine learning and its importance in explaining performance differences observed on specific problems.", "Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models is conducted in order to find the model that best characterizes the interactions among features. This paper expands existing model selection methodology and presents the first comparative study of model selection search strategies and evaluation criteria when applied to the problem of building probabilistic classifiers for word-sense disambiguation.", "A number of researchers in text processing have independently observed that people can consistently determine in which of several given senses a word is being used in text, simply by examining the half dozen or so words just before and just after the word in focus. The question arises whether the same task can be accomplished by mechanical means. Experimental results are presented which suggest an affirmative answer to this query. Three separate methods of discriminating English word senses are compared information-theoretically. Findings include a strong indication of the power of domain-specific content analysis of text, as opposed to domain-general approaches." ], "cite_N": [ "@cite_37", "@cite_14", "@cite_6", "@cite_3", "@cite_24", "@cite_2", "@cite_5", "@cite_16", "@cite_13" ], "mid": [ "176608537", "2952541071", "1999114220", "2949743947", "2047620598", "", "2949482574", "2072309235", "2035408139" ] }
Distinguishing Word Senses in Untagged Text
0
cmp-lg9706008
2951421399
This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high-dimensional feature set.
A more recent bootstrapping approach is described in @cite_23 . This algorithm requires a small number of training examples to serve as a seed. There are a variety of options discussed for automatically selecting seeds; one is to identify collocations that uniquely distinguish between senses. For plant , the collocations manufacturing plant and living plant make such a distinction. Based on 106 examples of manufacturing plant and 82 examples of living plant this algorithm is able to distinguish between two senses of plant for 7,350 examples with 97 percent accuracy. Experiments with 11 other words using collocation seeds result in an average accuracy of 96 percent.
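The bootstrapping procedure just described can be sketched compactly. The snippet below is a minimal illustration, not the algorithm of @cite_23 itself: the seed collocations, the toy contexts, the log-likelihood decision-list scoring, and the confidence threshold are all assumptions made for the example.

```python
import math
from collections import defaultdict

# Hypothetical seed collocations for two senses of "plant"
SEEDS = {"manufacturing": "industrial", "living": "biological"}

def bootstrap(contexts, seeds, rounds=5, threshold=1.5):
    # contexts: lists of words surrounding an occurrence of the ambiguous word
    labels = [next((s for w, s in seeds.items() if w in ctx), None) for ctx in contexts]
    for _ in range(rounds):
        # decision list: smoothed per-sense counts for every context word
        counts = defaultdict(lambda: defaultdict(lambda: 0.1))
        for ctx, lab in zip(contexts, labels):
            if lab is not None:
                for w in ctx:
                    counts[w][lab] += 1
        def best_rule(ctx):
            best_llr, best_sense = 0.0, None
            for w in ctx:
                a, b = counts[w]["industrial"], counts[w]["biological"]
                llr = abs(math.log(a / b))
                if llr > best_llr:
                    best_llr = llr
                    best_sense = "industrial" if a > b else "biological"
            return best_llr, best_sense
        # extend labels to unlabeled instances whose strongest rule is confident
        for i, ctx in enumerate(contexts):
            if labels[i] is None:
                llr, sense = best_rule(ctx)
                if sense is not None and llr >= threshold:
                    labels[i] = sense
    return labels

contexts = [["manufacturing", "output"], ["living", "cell"],
            ["factory", "output"], ["cell", "growth"]]
print(bootstrap(contexts, SEEDS))
```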
{ "abstract": [ "This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96 ." ], "cite_N": [ "@cite_23" ], "mid": [ "2101210369" ] }
Distinguishing Word Senses in Untagged Text
0
cmp-lg9706008
2951421399
This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns rather than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high dimensional feature set.
While @cite_23 does not discuss distinguishing more than 2 senses of a word, there is no immediate reason to doubt that the ``one sense per collocation'' rule @cite_24 would still hold for a larger number of senses. In future work we will evaluate using the ``one sense per collocation'' rule to seed our various methods. This may help in dealing with very skewed distributions of senses since we currently select collocations based simply on frequency.
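For concreteness, the contrast between the two seeding strategies mentioned above, frequency-based selection versus ``one sense per collocation'' purity, might be sketched as follows; the seed examples and the minimum-count threshold are hypothetical.

```python
from collections import Counter, defaultdict

def frequent_collocates(contexts, k=3):
    # current strategy: pick the k most frequent context words
    freq = Counter(w for ctx in contexts for w in ctx)
    return [w for w, _ in freq.most_common(k)]

def pure_collocates(seed_contexts, min_count=2):
    # "one sense per collocation" strategy: keep words that are seen often
    # enough and always occur with a single seed-labeled sense
    by_word = defaultdict(Counter)
    for ctx, sense in seed_contexts:
        for w in ctx:
            by_word[w][sense] += 1
    return [w for w, c in by_word.items()
            if sum(c.values()) >= min_count and len(c) == 1]

seeds = [(["manufacturing", "output"], "industrial"),
         (["manufacturing", "jobs"], "industrial"),
         (["living", "cell"], "biological"),
         (["living", "tissue"], "biological")]
print(frequent_collocates([ctx for ctx, _ in seeds]))
print(pure_collocates(seeds))
```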
{ "abstract": [ "Previous work [Gale, Church and Yarowsky, 1992] showed that with high probability a polysemous word has one sense per discourse. In this paper we show that for certain definitions of collocation, a polysemous word exhibits essentially only one sense per collocation. We test this empirical hypothesis for several definitions of sense and collocation, and discover that it holds with 90--99 accuracy for binary ambiguities. We utilize this property in a disambiguation algorithm that achieves precision of 92 using combined models of very local context.", "This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96 ." ], "cite_N": [ "@cite_24", "@cite_23" ], "mid": [ "2047620598", "2101210369" ] }
Distinguishing Word Senses in Untagged Text
0
cmp-lg9706008
2951421399
This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns rather than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high dimensional feature set.
Clustering has most often been applied in natural language processing as a method for inducing syntactically or semantically related groupings of words (e.g., @cite_19 , @cite_26 , @cite_25 , @cite_34 , @cite_1 , @cite_35 ).
{ "abstract": [ "Word groupings useful for language processing tasks are increasingly available, as thesauri appear on-line, and as distributional word clustering techniques improve. However, for many tasks, one is interested in relationships among word senses, not words. This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns — the kind of data one finds in on-line thesauri, or as the output of distributional clustering algorithms. Disambiguation is performed with respect to WordNet senses, which are fairly fine-grained; however, the method also permits the assignment of higher-level WordNet categories rather than sense labels. The method is illustrated primarily by example, though results of a more rigorous evaluation are also presented.", "Publisher Summary This chapter presents a detailed description of a model for a learning process, which was proposed as an account of the learning of word classes by the child. This model is related to other theories and empirical findings to describe the results of a computer simulation, which uses recorded speech of some mothers to their children as the input corpus. It is not a complete theory of language acquisition, only an intended component of such a theory. The relationship of the proposed mechanism to other component subsystems, believed to take part in language acquisition, are indicated in the chapter. A detailed comparison is made between the model and other theoretical formulations, which finds that with the exception of the mediation theory, none of the formulations is capable of accounting for the earliest stage of word class learning. The model is related to empirical findings, which demonstrates that it can account for them. Particularly, the S-P shift is a natural consequence of the memory organization in the model. Analysis of this output from the program showed that it contains grammatically appropriate classes and exhibits certain aspects known to be characteristic for the word class systems of young children.", "", "Syntactic information about a corpus of linguistic or pictorial data can be discovered by analyzing the statistics of the data. Given a corpus of text, one can measure the tendencies of pairs of words to occur in common contexts, and use these measurements to define clusters of words. Applied to basic English text, this procedure yields clusters which correspond very closely to the traditional parts of speech (nouns, verbs, articles, etc.). For FORTRAN text, the clusters obtained correspond to integers, operations, etc.; for English text regarded as a sequence of letters (or of phonemes) rather than words, the vowels and the consonants are obtained as clusters. Finally, applied to the gray shades in a digitized picture, the procedure yields slice levels which appear to be useful for figure extraction.", "We describe and experimentally evaluate a method for automatically clustering words according to their distribution in particular syntactic contexts. Deterministic annealing is used to find lowest distortion sets of clusters. As the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical soft'' clustering of the data. Clusters are used as the basis for class models of word coocurrence, and the models evaluated with respect to held-out test data.", "" ], "cite_N": [ "@cite_35", "@cite_26", "@cite_1", "@cite_19", "@cite_34", "@cite_25" ], "mid": [ "1608874027", "1572705468", "", "2030720628", "2950928021", "" ] }
Distinguishing Word Senses in Untagged Text
0
cmp-lg9706008
2951421399
This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns rather than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high dimensional feature set.
An early application of clustering to word-sense disambiguation is described in @cite_29 . There, words are represented in terms of the co-occurrence statistics of four-letter sequences. This representation uses 97 features to characterize a word, where each feature is a linear combination of letter four-grams formulated by a singular value decomposition of a 5000 by 5000 matrix of letter four-gram co-occurrence frequencies. The weight associated with each feature reflects all usages of the word in the sample. A context vector is formed for each occurrence of an ambiguous word by summing the vectors of the contextual words (the number of contextual words considered in the sum is unspecified). The set of context vectors for the word to be disambiguated is then clustered, and the clusters are manually sense tagged.
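A drastically scaled-down sketch of this second-order representation is given below; it uses word (rather than letter four-gram) co-occurrences, an SVD for dimensionality reduction, summed context vectors, and a naive two-cluster k-means. All of these are simplifying assumptions for illustration, not the cited system's actual choices.

```python
import numpy as np

def reduced_word_vectors(cooc, dim):
    # cooc: V x V matrix of co-occurrence counts; returns one dense vector per word
    u, s, _ = np.linalg.svd(np.asarray(cooc, dtype=float), full_matrices=False)
    return u[:, :dim] * s[:dim]

def context_vector(word_ids, word_vecs):
    # second-order representation: sum the reduced vectors of the context words
    return word_vecs[word_ids].sum(axis=0)

def two_means(X, iters=20, seed=0):
    # naive 2-cluster k-means standing in for the clustering step
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

# toy example: 6-word vocabulary, contexts given as lists of word ids
cooc = np.random.default_rng(1).integers(0, 5, size=(6, 6))
W = reduced_word_vectors(cooc, dim=3)
contexts = [[0, 1], [0, 2], [4, 5], [3, 5]]
X = np.stack([context_vector(ids, W) for ids in contexts])
print(two_means(X))
```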
{ "abstract": [ "The representation of documents and queries as vectors in a high-dimensional space is well-established in information retrieval. The author proposes that the semantics of words and contexts in a text be represented as vectors. The dimensions of the space are words and the initial vectors are determined by the words occurring close to the entity to be represented, which implies that the space has several thousand dimensions (words). This makes the vector representations (which are dense) too cumbersome to use directly. Therefore, dimensionality reduction by means of a singular value decomposition is employed. The author analyzes the structure of the vector representations and applies them to word sense disambiguation and thesaurus induction. >" ], "cite_N": [ "@cite_29" ], "mid": [ "2149671658" ] }
Distinguishing Word Senses in Untagged Text
0
cmp-lg9706008
2951421399
This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns rather than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high dimensional feature set.
The features used in this work are complex and difficult to interpret, and it is not clear that this complexity is required. @cite_23 compares his method to @cite_29 and shows that, for four words, the former performs significantly better in distinguishing between two senses.
{ "abstract": [ "The representation of documents and queries as vectors in a high-dimensional space is well-established in information retrieval. The author proposes that the semantics of words and contexts in a text be represented as vectors. The dimensions of the space are words and the initial vectors are determined by the words occurring close to the entity to be represented, which implies that the space has several thousand dimensions (words). This makes the vector representations (which are dense) too cumbersome to use directly. Therefore, dimensionality reduction by means of a singular value decomposition is employed. The author analyzes the structure of the vector representations and applies them to word sense disambiguation and thesaurus induction. >", "This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96 ." ], "cite_N": [ "@cite_29", "@cite_23" ], "mid": [ "2149671658", "2101210369" ] }
Distinguishing Word Senses in Untagged Text
0
cmp-lg9511007
2950225692
This paper presents a new measure of semantic similarity in an IS-A taxonomy, based on the notion of information content. Experimental evaluation suggests that the measure performs encouragingly well (a correlation of r = 0.79 with a benchmark set of human similarity judgments, with an upper bound of r = 0.90 for human subjects performing the same task), and significantly better than the traditional edge counting approach (r = 0.66).
The literature on corpus-based determination of word similarity has recently been growing by leaps and bounds, and is too extensive to discuss in detail here (for a review, see @cite_1 ), but most approaches to the problem share a common assumption: semantically similar words have similar distributional behavior in a corpus. Using this assumption, it is common to treat the words that co-occur near a word as constituting features, and to compute word similarity in terms of how similar their feature sets are. As in information retrieval, the ``feature'' representation of a word often takes the form of a vector, with the similarity computation amounting to a computation of distance in a highly multidimensional space. Given a distance measure, it is not uncommon to derive word classes by hierarchical clustering. A difficulty with most distributional methods, however, is how the measure of similarity (or distance) is to be interpreted. Although word classes resulting from distributional clustering are often described as ``semantic,'' they often capture syntactic, pragmatic, or stylistic factors as well.
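The distributional assumption described above can be made concrete in a few lines: represent each word by the counts of its neighbors and compare the resulting vectors with a distance or similarity measure. The toy corpus, window size, and choice of cosine similarity below are illustrative assumptions only.

```python
import numpy as np
from collections import defaultdict

def cooccurrence_vectors(sentences, window=2):
    # each word is represented by counts of the words co-occurring within the window
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    vecs = defaultdict(lambda: np.zeros(len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if i != j:
                    vecs[w][idx[s[j]]] += 1
    return vecs

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

sents = [["doctors", "treat", "patients"], ["nurses", "treat", "patients"],
         ["farmers", "harvest", "wheat"]]
V = cooccurrence_vectors(sents)
print(cosine(V["doctors"], V["nurses"]), cosine(V["doctors"], V["farmers"]))
```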
{ "abstract": [ "Selectional constraints are limitations on the applicability of predicates to arguments. For example, the statement \"The number two is blue\" may be syntactically well formed, but at some level it is anomalous-- scBLUE is not a predicate that can be applied to numbers. In this dissertation, I propose a new, information-theoretic account of selectional constraints. Unlike previous approaches, this proposal requires neither the identification of primitive semantic features nor the formalization of complex inferences based on world knowledge. The proposed model assumes instead that lexical items are organized in a conceptual taxonomy according to class membership, where classes are defined simply as sets--that is, extensionally, rather than in terms of explicit features or properties. Selection is formalized in terms of a probabilistic relationship between predicates and concepts: the selectional behavior of a predicate is modeled as its distributional effect on the conceptual classes of its arguments, expressed using the information-theoretic measure of relative entropy. The use of relative entropy leads to an illuminating interpretation of what selectional constraints are: the strength of a predicate's selection for an argument is identified with the quantity of information it carries about that argument. In addition to arguing that the model is empirically adequate, I explore its application to two problems. The first concerns a linguistic question: why some transitive verbs permit implicit direct objects (\"John ate @math \") and others do not (\"*John brought @math \"). It has often been observed informally that the omission of objects is connected to the ease with which the object can be inferred. I have made this observation more formal by positing a relationship between inferability and selectional constraints, and have confirmed the connection between selectional constraints and implicit objects in a set of computational experiments. Second, I have explored the practical applications of the model in resolving syntactic ambiguity. A number of authors have recently begun investigating the use of corpus-based lexical statistics in automatic parsing; the results of computational experiments using the present model suggest that often lexical relationships are better viewed in terms of underlying conceptual relationships such as selectional preference and concept similarity. Thus the information-theoretic measures proposed here can serve not only as components in a theory of selectional constraints, but also as tools for practical natural language processing." ], "cite_N": [ "@cite_1" ], "mid": [ "1516391399" ] }
Using Information Content to Evaluate Semantic Similarity in a Taxonomy
Evaluating semantic relatedness using network representations is a problem with a long history in artificial intelligence and psychology, dating back to the spreading activation approach of Quillian [1968] and Collins and Loftus [1975]. Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. [1989] suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic (is-a) links, to the exclusion of other link types; that view will also be taken here, although admittedly it excludes some potentially useful information. A natural way to evaluate semantic similarity in a taxonomy is to evaluate the distance between the nodes corresponding to the items being compared -the shorter the path from one node to another, the more similar they are. Given multiple paths, one takes the length of the shortest one [Lee et al., 1993;Rada and Bicknell, 1989;Rada et al., 1989]. A widely acknowledged problem with this approach, however, is that it relies on the notion that links in the taxonomy represent uniform distances. Unfortunately, this is difficult to define, much less to control. In real taxonomies, there is wide variability in the "distance" covered by a single taxonomic link, particularly when certain sub-taxonomies (e.g. biological categories) are much denser than others. For example, in Word-Net [Miller, 1990], a broad-coverage semantic network for English constructed by George Miller and colleagues at Princeton, it is not at all difficult to find links that cover an intuitively narrow distance (rabbit ears is-a television antenna) or an intuitively wide one (phytoplankton is-a living thing). The same kinds of examples can be found in the Collins COBUILD Dictionary [Sinclair (ed.), 1987], which identifies superordinate terms for many words (e.g. safety valve is-a valve seems a lot narrower than knitting machine is-a machine). In this paper, I describe an alternative way to evaluate semantic similarity in a taxonomy, based on the notion of information content. Like the edge counting method, it is conceptually quite simple. However, it is not sensitive to the problem of varying link distances. In addition, by combining a taxonomic structure with empirical probability estimates, it provides a way of adapting a static knowledge structure to multiple contexts. Section 2 sets up the probabilistic framework and defines the measure of semantic similarity in information-theoretic terms; Section 3 presents an evaluation of the similarity measure against human similarity judgments, using the simple edge-counting method as a baseline; and Section 4 discusses related work. Similarity and Information Content Let C be the set of concepts in an is-a taxonomy, permitting multiple inheritance. Intuitively, one key to the similarity of two concepts is the extent to which they share information in common, indicated in an is-a taxonomy by a highly specific concept that subsumes them both. The edge counting method captures this indirectly, since if the minimal path of is-a links between two nodes is long, that means it is necessary to go high in the taxonomy, to more abstract concepts, in order to find a least upper bound. For example, in WordNet, nickel and dime are both subsumed by coin, whereas the most specific superclass that nickel and credit card share is medium of exchange. 
Figure 1: Fragment of the WordNet taxonomy. Solid lines represent is-a links; dashed lines indicate that some intervening nodes were omitted to save space. By associating probabilities with concepts in the taxonomy, it is possible to capture the same idea, but avoiding the unreliability of edge distances. Let the taxonomy be augmented with a function p : C → [0, 1], such that for any c ∈ C, p(c) is the probability of encountering an instance of concept c. This implies that p is monotonic as one moves up the taxonomy: if c 1 is-a c 2 , then p(c 1 ) ≤ p(c 2 ). Moreover, if the taxonomy has a unique top node then its probability is 1. Following the standard argumentation of information theory [Ross, 1976], the information content of a concept c can be quantified as negative the log likelihood, − log p(c). Notice that quantifying information content in this way makes intuitive sense in this setting: as probability increases, informativeness decreases, so the more abstract a concept, the lower its information content. Moreover, if there is a unique top concept, its information content is 0. This quantitative characterization of information provides a new way to measure semantic similarity. The more information two concepts share in common, the more similar they are, and the information shared by two concepts is indicated by the information content of the concepts that subsume them in the taxonomy. Formally, define sim(c 1 , c 2 ) = max c ∈ S(c1, c2) [− log p(c)] ,(1) where S(c 1 , c 2 ) is the set of concepts that subsume both c 1 and c 2 . Notice that although similarity is computed by considering all upper bounds for the two concepts, the information measure has the effect of identifying minimal upper bounds, since no class is less informative than its superordinates. For example, in Figure 1, coin, cash, etc. are all members of S(nickel, dime), but the concept that is structurally the minimal upper bound, coin, will also be the most informative. This can make a difference in cases of multiple inheritance; for example, in Figure 2, metal and chemical element are not structurally distinguishable as upper bounds of nickel' and gold', but their information content may in fact be quite different. In practice, one often needs to measure word similardimes are both small, round, metallic, and so on. These features are captured implicitly by the taxonomy in categorizing nickel and dime as subordinates of coin. ity, rather than concept similarity. Using s(w) to represent the set of concepts in the taxonomy that are senses of word w, define sim(w 1 , w 2 ) = max c1, c2 [sim(c 1 , c 2 )] ,(2) where c 1 ranges over s(w 1 ) and c 2 ranges over s(w 2 ). This is consistent with Rada et al.'s [1989] treatment of "disjunctive concepts" using edge counting: they define the distance between two disjunctive sets of concepts as the minimum path length from any element of the first set to any element of the second. Here, the word similarity is judged by taking the maximal information content over all concepts of which both words could be an instance. For example, Figure 2 illustrates how the similarity of words nickel and gold would be computed: the information content would be computed for all classes subsuming any pair in the cross product of {nickel,nickel'} and {gold,gold'}, and the information content of the most informative class used to quantify the similarity of the two words. 
Evaluation Implementation The work reported here used WordNet's (50,000-node) taxonomy of concepts represented by nouns (and compound nominals) in English. 2 Frequencies of concepts in the taxonomy were estimated using noun frequencies from the Brown Corpus of American English [Francis and Kučera, 1982], a large (1,000,000 word) collection of text across genres ranging from news articles to science fiction. Each noun that occurred in the corpus was counted as an occurrence of each taxonomic class containing it. 3 For example, in Figure 1, an occurrence of the noun dime would be counted toward the frequency of dime, coin, and so forth. Formally, freq(c) = n∈words(c) count(n),(3) where words(c) is the set of words subsumed by concept c. Concept probabilities were computed simply as relative frequency: p(c) = freq(c) N ,(4) where N was the total number of nouns observed (excluding those not subsumed by any WordNet class, of course). Task Although there is no standard way to evaluate computational measures of semantic similarity, one reasonable way to judge would seem to be agreement with human similarity ratings. This can be assessed by using a computational similarity measure to rate the similarity of a set of word pairs, and looking at how well its ratings correlate with human ratings of the same pairs. An experiment by Miller and Charles [1991] provided appropriate human subject data for the task. In their study, 38 undergraduate subjects were given 30 pairs of nouns that were chosen to cover high, intermediate, and low levels of similarity (as determined using a previous study [Rubenstein and Goodenough, 1965]), and asked to rate "similarity of meaning" for each pair on a scale from 0 (no similarity) to 4 (perfect synonymy). The average rating for each pair thus represents a good estimate of how similar the two words are, according to human judgments. In order to get a baseline for comparison, I replicated Miller and Charles's experiment, giving ten subjects the same 30 noun pairs. The subjects were all computer science graduate students or postdocs at the University of Pennsylvania, and the instructions were exactly the same as used by Miller and Charles, the main difference being that in this replication the subjects completed the questionnaire by electronic mail (though they were instructed to complete the whole thing in a single uninterrupted sitting). Five subjects received the list of word pairs in a random order, and the other five received the list in the reverse order. The correlation between the Miller and Charles mean ratings and the mean ratings in my replication was .96, quite close to the .97 correlation that Miller and Charles obtained between their results and the ratings determined by the earlier study. For each subject in my replication, I computed how well his or her ratings correlated with the Miller and Charles ratings. The average correlation over the 10 subjects was r = 0.8848, with a standard deviation of 0.08. 4 This value represents an upper bound on what one should expect from a computational attempt to perform the same task. For purposes of evaluation, three computational similarity measures were used. The first is the similarity measurement using information content proposed in the previous section. 
The second is a variant on the edge counting method, converting it from distance to similarity by subtracting the path length from the maximum possible path length: sim edge (w 1 , w 2 ) = (2 × max)− min c1, c2 len(c 1 , c 2 ) (5) where c 1 ranges over s(w 1 ), c 2 ranges over s(w 2 ), max is the maximum depth of the taxonomy, and len(c 1 , c 2 ) Similarity method Correlation Human judgments (replication) r = .9015 Information content r = .7911 Probability r = .6671 Edge counting r = .6645 is the length of the shortest path from c 1 to c 2 . (Recall that s(w) denotes the set of concepts in the taxonomy that represent senses of word w.) Note that the conversion from a distance to a similarity can be viewed as an expository convenience, and does not affect the evaluation: although the sign of the correlation coefficient changes from positive to negative, its magnitude turns out to be just the same regardless of whether or not the minimum path length is subtracted from (2 × max). The third point of comparison is a measure that simply uses the probability of a concept, rather than the information content: sim p(c) (c 1 , c 2 ) = max c ∈ S(c1, c2) [1 − p(c)] (6) sim p(c) (w 1 , w 2 ) = max c1, c2 sim p(c) (c 1 , c 2 ) ,(7) where c 1 ranges over s(w 1 ) and c 2 ranges over s(w 2 ) in (7). Again, the difference between maximizing 1−p(c) and minimizing p(c) turns out not to affect the magnitude of the correlation. It simply ensures that the value can be interpreted as a similarity value, with high values indicating similar words. Table 1 summarizes the experimental results, giving the correlation between the similarity ratings and the mean ratings reported by Miller and Charles. Note that, owing to a noun missing from the WordNet taxonomy, it was only possible to obtain computational similarity ratings for 28 of the 30 noun pairs; hence the proper point of comparison for human judgments is not the correlation over all 30 items (r = .8848), but rather the correlation over the 28 included pairs (r = .9015). The similarity ratings by item are given in Table 3. Results Discussion The experimental results in the previous section suggest that measuring semantic similarity using information content provides quite reasonable results, significantly better than the traditional method of simply counting the number of intervening is-a links. The measure is not without its problems, however. One problem is that, like simple edge counting, the measure sometimes produces spuriously high similarity measures for words on the basis of inappropriate word senses. For example, Table 2 shows the word similarity for several words with tobacco. Tobacco and alcohol are similar, both being drugs, and tobacco and sugar are less similar, though not entirely dissimilar, since both can be classified as substances. The problem arises, however, in the similarity rating for tobacco with horse: the word n1 n2 sim(n1,n2) class tobacco alcohol 7.63 drug tobacco sugar 3.56 substance tobacco horse 8.26 narcotic Table 2: Similarity with tobacco computed by maximizing information content horse can be used as a slang term for heroin, and as a result information-based similarity is maximized, and path length minimized, when the two words are both categorized as narcotics. This is contrary to intuition. Cases like this are probably relatively rare. 
However, the example illustrates a more general concern: in measuring similarity between words, it is really the relationship among word senses that matters, and a similarity measure should be able to take this into account. In the absence of a reliable algorithm for choosing the appropriate word senses, the most straightforward way to do so in the information-based setting is to consider all concepts to which both nouns belong rather than taking just the single maximally informative class. This suggests redefining similarity as follows: sim(c 1 , c 2 ) = i α(c i )[− log p(c i )],(8) where {c i } is the set of concepts dominating both c 1 and c 2 , as before, and i α(c i ) = 1. This measure of similarity takes more information into account than the previous one: rather than relying on the single concept with maximum information content, it allows each class to contribute information content according to the value of α(c i ). Intuitively, these α values measure relevancefor example, α(narcotic) might be low in general usage but high in the context of a newspaper article about drug dealers. In work on resolving syntactic ambiguity using semantic information [Resnik, 1993b], I have found that local syntactic information can be used successfully to set values for the α. Conclusions This paper has presented a new measure of semantic similarity in an is-a taxonomy, based on the notion of information content. Experimental evaluation was performed using a large, independently constructed corpus, an independently constructed taxonomy, and previously existing human subject data. The results suggest that the measure performs encouragingly well (a correlation of r = 0.79 with a benchmark set of human similarity judgments, against an upper bound of r = 0.90 for human subjects performing the same task), and significantly better than the traditional edge counting approach (r = 0.66). In ongoing work, I am currently exploring the application of taxonomically-based semantic similarity in the disambiguation of word senses [Resnik, 1995]. The idea behind the approach is that when polysemous words appear together, the appropriate word senses to assign are often those that share elements of meaning. Thus doctor can refer to either a Ph.D. or an M.D., and nurse can signify either a health professional or someone who takes care of small children; but when doctor and nurse are seen together, the Ph.D. sense and the childcare sense go by the wayside. In a widely known paper, Lesk [1986] exploits dictionary definitions to identify shared elements of meaning -for example, in the Collins COBUILD Dictionary [Sinclair (ed.), 1987], the word ill can be found in the definitions of the correct senses. More recently, Sussna [1993] has explored using similarity of word senses based on WordNet for the same purpose. The work I am pursuing is similar in spirit to Sussna's approach, although the disambiguation algorithm and the similarity measure differ substantially.
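A self-contained sketch of the information-content similarity defined in the text follows: sim(c1, c2) is the maximum of -log p(c) over the concepts c subsuming both arguments, with p(c) estimated as relative frequency propagated up an is-a taxonomy. The miniature taxonomy and counts below are hypothetical stand-ins for WordNet and the Brown Corpus.

```python
import math

# child -> parents (multiple inheritance allowed); hypothetical mini-taxonomy
PARENTS = {
    "nickel": ["coin"], "dime": ["coin"], "coin": ["cash"],
    "cash": ["medium_of_exchange"], "credit_card": ["medium_of_exchange"],
    "medium_of_exchange": ["entity"], "entity": [],
}
COUNTS = {"nickel": 5, "dime": 5, "credit_card": 10}   # invented noun counts

def ancestors(c):
    # all concepts subsuming c, including c itself
    seen, stack = set(), [c]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(PARENTS[node])
    return seen

def concept_probability():
    # a noun occurrence counts toward every class that subsumes it
    freq = {c: 0.0 for c in PARENTS}
    for word, n in COUNTS.items():
        for c in ancestors(word):
            freq[c] += n
    total = sum(COUNTS.values())
    return {c: f / total for c, f in freq.items()}

P = concept_probability()

def sim(c1, c2):
    shared = ancestors(c1) & ancestors(c2)
    return max(-math.log(P[c]) for c in shared if P[c] > 0)

print(sim("nickel", "dime"))         # share the informative subsumer coin
print(sim("nickel", "credit_card"))  # only share medium_of_exchange: less similar
```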
2,749
cmp-lg9702008
2950202165
Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models is conducted in order to find the model that best characterizes the interactions among features. This paper expands existing model selection methodology and presents the first comparative study of model selection search strategies and evaluation criteria when applied to the problem of building probabilistic classifiers for word-sense disambiguation.
Statistical analysis of NLP data has often been limited to the application of standard models, such as n-gram (Markov chain) models and the Naive Bayes model. While n-grams perform well in part-of-speech tagging and speech processing, they require a fixed interdependency structure that is inappropriate for the broad class of contextual features used in word-sense disambiguation. However, the Naive Bayes classifier has been found to perform well for word-sense disambiguation both here and in a variety of other works (e.g., @cite_18 , @cite_10 , @cite_2 , and @cite_1 ).
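As a point of reference for the discussion above, a bare-bones Naive Bayes sense classifier over bag-of-words context features might look like the following; the training contexts and sense labels are invented for illustration, and add-one smoothing is an arbitrary choice.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    def fit(self, contexts, senses):
        self.senses = Counter(senses)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for ctx, s in zip(contexts, senses):
            self.word_counts[s].update(ctx)
            self.vocab.update(ctx)
        return self

    def predict(self, ctx):
        n = sum(self.senses.values())
        best, best_lp = None, float("-inf")
        for s, cnt in self.senses.items():
            lp = math.log(cnt / n)                      # prior
            total = sum(self.word_counts[s].values()) + len(self.vocab)
            for w in ctx:                               # add-one smoothed likelihoods
                lp += math.log((self.word_counts[s][w] + 1) / total)
            if lp > best_lp:
                best, best_lp = s, lp
        return best

clf = NaiveBayesWSD().fit(
    [["interest", "rate", "bank"], ["interest", "hobby", "music"]],
    ["financial", "attention"])
print(clf.predict(["rate", "loan"]))
```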
{ "abstract": [ "This paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context. The algorithms tested include statistical, neural-network, decision-tree, rule-based, and case-based classification techniques. The specific problem tested involves disambiguating six senses of the word line'' using the words in the current and proceeding sentence as context. The statistical and neural-network methods perform the best on this particular problem and we discuss a potential reason for this observed difference. We also discuss the role of bias in machine learning and its importance in explaining performance differences observed on specific problems.", "", "Word sense disambiguation has been recognized as a major problem in natural language processing research for over forty years. Both quantitive and qualitative methods have been tried, but much of this work has been stymied by difficulties in acquiring appropriate lexical resources. The availability of this testing and training material has enabled us to develop quantitative disambiguation methods that achieve 92 accuracy in discriminating between two very distinct senses of a noun. In the training phase, we collect a number of instances of each sense of the polysemous noun. Then in the testing phase, we are given a new instance of the noun, and are asked to assign the instance to one of the senses. We attempt to answer this question by comparing the context of the unknown instance with contexts of known instances using a Bayesian argument that has been applied successfully in related tasks such as author identification and information retrieval. The proposed method is probably most appropriate for those aspects of sense disambiguation that are closest to the information retrieval task. In particular, the proposed method was designed to disambiguate senses that are usually associated with different topics.", "The three corpus-based statistical sense resolution methods studied here attempt to infer the correct sense of a polysemous word by using knowledge about patterns of word cooccurrences. The techniques were based on Bayesian decision theory, neural, networks, and content vectors as used in information retrieval. To understand these methods better, we posed a very specific problem: given a set of contexts, each containing the noun line in a known sense, construct a classifier that selects the correct sense of line for new contexts. To see how the degree of polysemy affects performance, results from three- and six-sense tasks are compared.The results demonstrate that each of the techniques is able to distinguish six senses of line with an accuracy greater than 70 . Furthermore, the response patterns of the classifiers are, for the most part, statistically indistinguishable from one another. Comparison of the two tasks suggests that the degree of difficulty involved in resolving individual senses is a greater performance factor than the degree of polysemy." ], "cite_N": [ "@cite_1", "@cite_18", "@cite_10", "@cite_2" ], "mid": [ "2949482574", "", "1977182536", "1999114220" ] }
Sequential Model Selection for Word Sense Disambiguation
0
cmp-lg9702008
2950202165
Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models is conducted in order to find the model that best characterizes the interactions among features. This paper expands existing model selection methodology and presents the first comparative study of model selection search strategies and evaluation criteria when applied to the problem of building probabilistic classifiers for word-sense disambiguation.
In order to utilize models with more complicated interactions among feature variables, @cite_8 introduce the use of sequential model selection and decomposable models for word-sense disambiguation. They recommend a model selection procedure using backward sequential search (BSS) and the exact conditional test in combination with a test of model predictive power. In their procedure, the exact conditional test guides the generation of new models, and the test of model predictive power selects the final model from among those generated during the search.
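The control flow of such a backward sequential search can be sketched schematically, with the statistical fit test abstracted behind a callback. The toy test below is only a stand-in, not the exact conditional test or the predictive-power test used by @cite_8 ; the interaction terms and data argument are likewise hypothetical.

```python
# Schematic backward sequential search: start from a complex model (a set of
# interaction terms) and greedily remove the term whose removal the fit test
# tolerates best, stopping when no simpler model fits acceptably.
def backward_sequential_search(terms, acceptable_fit, data):
    """terms: iterable of interaction terms; acceptable_fit(model, data) -> (ok, score)."""
    model = set(terms)
    while True:
        candidates = []
        for t in model:
            ok, score = acceptable_fit(model - {t}, data)
            if ok:
                candidates.append((score, t))
        if not candidates:
            return model                 # no simpler model fits acceptably
        _, drop = max(candidates)        # drop the most dispensable term
        model = model - {drop}

# Toy stand-in for a real significance test: pretend only these terms matter.
NEEDED = {("sense", "w1"), ("sense", "pos")}
fake_test = lambda model, data: (NEEDED <= model, -len(model))
print(backward_sequential_search(
    [("sense", "w1"), ("sense", "pos"), ("sense", "w2"), ("w1", "w2")],
    fake_test, data=None))
```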
{ "abstract": [ "Most probabilistic classifiers used for word-sense disambiguation have either been based on only one contextual feature or have used a model that is simply assumed to characterize the interdependencies among multiple contextual features. In this paper, a different approach to formulating a probabilistic model is presented along with a case study of the performance of models produced in this manner for the disambiguation of the noun \"interest\". We describe a method for formulating probabilistic models that use multiple contextual features for word-sense disambiguation, without requiring untested assumptions regarding the form of the model. Using this approach, the joint distribution of all variables is described by only the most systematic variable interactions, thereby limiting the number of parameters to be estimated, supporting computational efficiency, and providing an understanding of the data." ], "cite_N": [ "@cite_8" ], "mid": [ "2952541071" ] }
Sequential Model Selection for Word Sense Disambiguation
0
cmp-lg9702008
2950202165
Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models is conducted in order to find the model that best characterizes the interactions among features. This paper expands existing model selection methodology and presents the first comparative study of model selection search strategies and evaluation criteria when applied to the problem of building probabilistic classifiers for word-sense disambiguation.
Alternative probabilistic approaches have involved using a single contextual feature to perform disambiguation (e.g., @cite_17 , @cite_20 , and @cite_14 present techniques for identifying the optimal feature to use in disambiguation). Maximum Entropy models have been used to express the interactions among multiple feature variables (e.g., @cite_6 ), but within this framework no systematic study of interactions has been proposed. Decision tree induction has been applied to word-sense disambiguation (e.g., @cite_7 and @cite_1 ), but, while it is a type of model selection, the models are not parametric.
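For completeness, decision-tree induction over categorical context features of the kind used in this literature can be illustrated in a few lines. The features and training examples below are invented, and scikit-learn is assumed to be available; this is a sketch, not the cited systems' setup.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

# invented context features (previous and next word) for two senses of "interest"
train = [({"prev": "interest", "next": "rate"}, "financial"),
         ({"prev": "great", "next": "in"}, "attention"),
         ({"prev": "no", "next": "in"}, "attention"),
         ({"prev": "interest", "next": "payment"}, "financial")]
X, y = zip(*train)

vec = DictVectorizer(sparse=False)          # one-hot encode the categorical features
clf = DecisionTreeClassifier().fit(vec.fit_transform(X), y)
print(clf.predict(vec.transform([{"prev": "interest", "next": "rate"}])))
```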
{ "abstract": [ "Previous work [Gale, Church and Yarowsky, 1992] showed that with high probability a polysemous word has one sense per discourse. In this paper we show that for certain definitions of collocation, a polysemous word exhibits essentially only one sense per collocation. We test this empirical hypothesis for several definitions of sense and collocation, and discover that it holds with 90--99 accuracy for binary ambiguities. We utilize this property in a disambiguation algorithm that achieves precision of 92 using combined models of very local context.", "A number of researchers in text processing have independently observed that people can consistently determine in which of several given senses a word is being used in text, simply by examining the half dozen or so words just before and just after the word in focus. The question arises whether the same task can be accomplished by mechanical means. Experimental results are presented which suggest an affirmative answer to this query. Three separate methods of discriminating English word senses are compared information-theoretically. Findings include a strong indication of the power of domain-specific content analysis of text, as opposed to domain-general approaches.", "This paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context. The algorithms tested include statistical, neural-network, decision-tree, rule-based, and case-based classification techniques. The specific problem tested involves disambiguating six senses of the word line'' using the words in the current and proceeding sentence as context. The statistical and neural-network methods perform the best on this particular problem and we discuss a potential reason for this observed difference. We also discuss the role of bias in machine learning and its importance in explaining performance differences observed on specific problems.", "The concept of maximum entropy can be traced back along multiple threads to Biblical times. Only recently, however, have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognition. In this paper, we describe a method for statistical modeling based on maximum entropy. We present a maximum-likelihood approach for automatically constructing maximum entropy models and describe how to implement this approach efficiently, using as examples several problems in natural language processing.", "This paper presents a new approach for resolving lexical ambiguities in one language using statistical data on lexical relations in another language. This approach exploits the differences between mappings of words to senses in different languages. We concentrate on the problem of target word selection in machine translation, for which the approach is directly applicable, and employ a statistical model for the selection mechanism. The model was evaluated using two sets of Hebrew and German examples and was found to be very useful for disambiguation.", "We describe a statistical technique for assigning senses to words. An instance of a word is assigned a sense by asking a question about the context in which the word appears. The question is constructed to have high mutual information with the translation of that instance in another language. 
When we incorporated this method of assigning senses into our statistical machine translation system, the error rate of the system decreased by thirteen percent." ], "cite_N": [ "@cite_14", "@cite_7", "@cite_1", "@cite_6", "@cite_20", "@cite_17" ], "mid": [ "2047620598", "2035408139", "2949482574", "2096175520", "2137638032", "2129139611" ] }
Sequential Model Selection for Word Sense Disambiguation
0
cmp-lg9607014
2950224005
In this paper, we define the notion of a preventative expression and discuss a corpus study of such expressions in instructional text. We discuss our coding schema, which takes into account both form and function features, and present measures of inter-coder reliability for those features. We then discuss the correlations that exist between the function and the form features.
In computational linguistics, on the other hand, positive imperatives have been extensively investigated, both from the point of view of interpretation @cite_13 @cite_8 @cite_6 @cite_1 and of generation @cite_9 @cite_10 @cite_4 @cite_7 . Little work, however, has been directed at negative imperatives (for exceptions, see the work of in interpretation and of in generation).
{ "abstract": [ "Currently, computational linguists and cognitive scientists working in the area of discourse and dialogue argue that their subjective judgments are reliable using several different statistics, none of which are easily interpretable or comparable to each other. Meanwhile, researchers in content analysis have already experienced the same difficulties and come up with a solution in the kappa statistic. We discuss what is wrong with reliability measures as they are currently used for discourse and dialogue work in computational linguistics and cognitive science, and argue that we would be better off as a field adopting techniques from content analysis.", "", "This book offers a unique synthesis of past and current work on the structure, meaning, and use of negation and negative expressions, a topic that has engaged thinkers from Aristotle and the Buddha to Freud and Chomsky. Horn's masterful study melds a review of scholarship in philosophy, psychology, and linguistics with original research, providing a full picture of negation in natural language and thought; this new edition adds a comprehensive preface and bibliography, surveying research since the book's original publication.", "", "This paper addresses the problem of designing a system that accepts a plan structure of the sort generated by AI planning programs and produces natural language text explaining how to execute the plan. We describe a system that generates text from plans produced by the NONLIN planner (Tate 1976).The results of our system are promising, but the texts still lack much of the smoothness of human-generated text. This is partly because, although the domain of plans seems a priori to provide rich structure that a natural language generator can use, in practice a plan that is generated without the production of explanations in mind rarely contains the kinds of information that would yield an interesting natural language account. For instance, the hierarchical organization assigned to a plan is liable to reflect more a programmer's approach to generating a class of plans efficiently than the way that a human would naturally \"chunk\" the relevant actions. Such problems are, of course, similar to those that Swartout (1983) encountered with expert systems. In addition, AI planners have a restricted view of the world that is hard to match up with the normal semantics of natural language expressions. Thus constructs that are primitive to the planner may be only clumsily or misleadingly expressed in natural language, and the range of possible natural language constructs may be artificially limited by the shallowness of the planner's representations.", "Human agents are extremely flexible in dealing with Natural Language instructions. I argue that most instructions don't exactly mirror the agent's knowledge, but are understood by accommodating them in the context of the general plan the agent is considering; the accommodation process is guided by the goal(s) that the agent is trying to achieve. Therefore a NL system which interprets instructions must be able to recognize and or hypothesize goals; it must make use of a flexible knowledge representation system, able to support the specialized inferences necessary to deal with input action descriptions that do not exactly match the stored knowledge. The data that support my claim are Purpose Clauses (PCs), infinitival constructions as in @math , and Negative Imperatives. I present a pragmatic analysis of both PCs and Negative Imperatives. 
Furthermore, I analyze the computational consequences of PCs, in terms of the relations between actions PCs express, and of the inferences an agent has to perform to understand PCs. I propose an action representation formalism that provides the required flexibility. It has two components. The Terminological Box (TBox) encodes linguistic knowledge about actions, and is expressed by means of the hybrid system CLASSIC. To guarantee that the primitives of the representation are linguistically motivated, I derive them from Jackendoff's work on Conceptual Structures. The Action Library encodes planning knowledge about actions. The action terms used in the plans are those defined in the TBox. Finally, I present an algorithm that implements inferences necessary to understand @math , and supported by the formalism I propose. In particular, I show how the TBox classifier is used to infer whether @math can be assumed to match one of the substeps in the plan for @math , and how expectations necessary for the match to hold are computed.", "This thesis describes Sonja, a system which uses instructions in the course of visually-guided activity. The thesis explores an integration of research in vision, activity, and natural language pragmatics. Sonja''s visual system demonstrates the use of several intermediate visual processes, particularly visual search and routines, previously proposed on psychophysical grounds. The computations Sonja performs are compatible with the constraints imposed by neuroscientifically plausible hardware. Although Sonja can operate autonomously, it can also make flexible use of instructions provided by a human advisor. The system grounds its understanding of these instructions in perception and action.", "" ], "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_10", "@cite_9", "@cite_1", "@cite_6", "@cite_13" ], "mid": [ "2153804780", "", "2052154979", "146540459", "1564910013", "1507377794", "1592377142", "" ] }
A Corpus Study of Negative Imperatives in Natural Language Instructions
While interpreting instructions, an agent is continually faced with a number of possible actions to execute, the majority of which are not appropriate for the situation at hand. An instructor is therefore required not only to prescribe the appropriate actions to the reader, but also to prevent the reader from executing the inappropriate and potentially dangerous alternatives. The first task, which is commonly achieved by giving simple imperative commands and statements of purpose, has received considerable attention in both the interpretation (e.g., (Di Eugenio, 1993)) and the generation communities (e.g., (Vander Linden and Martin, 1995)). The second, achieved through the use of preventative expressions, has received considerably less attention. Such expressions can indicate actions that the agent should not perform, or manners of execution that the agent should not adopt. An agent may be told, for example, "Do not enter" or "Take care not to push too hard". Both of the examples just given involve negation ("do not " and "take care not "). Although this is not strictly necessary for preventative expressions (e.g., one might say "stay out" rather than "do not enter"), we will focus on the use of negative forms in this paper. We will use the following categorisation of explicit preventative expressions: • negative imperatives proper (termed DONT imperatives). These are characterised by the negative auxiliary do not or don't. (1) Your sheet vinyl floor may be vinyl asbestos, which is no longer on the market. Don't sand it or tear it up because this will put dangerous asbestos fibers into the air. • other preventative imperatives (termed neg-TC imperatives). These include take care and be careful followed by a negative infinitival complement, as in the following examples: (2) To book the strip, fold the bottom third or more of the strip over the middle of the panel, pasted sides together, taking care not to crease the wallpaper sharply at the fold. (3) If your plans call for replacing the wood base molding with vinyl cove molding, be careful not to damage the walls as you remove the wood base. The question of interest for us is under which conditions one or the other of the surface forms is chosen. We are currently using this information to drive the generation of warning messages in the drafter system (Vander Linden and Di Eugenio, 1996). We will start by discussing previous work on negative imperatives, and by presenting an hypothesis to be explored. We will then describe the nature of our corpus and our coding schema, detailing the results of our inter-coder reliability tests. Finally, we will describe the results of our analysis of the correlation between function and form features. A Priori Hypotheses Di Eugenio (1993) put forward the following hypothesis concerning the realization of preventative expressions. In this discussion, S refers to the instructor (speaker / writer) who is referred to with feminine pronouns, and H to the agent (hearer / reader), referred to with masculine pronouns: • DONT imperatives. A DONT imperative is used when S expects H to be aware of a certain choice point, but to be likely to choose the wrong alternative among many -possibly infinite -ones, as in: (4) Dust-mop or vacuum your parquet floor as you would carpeting. Do not scrub or wet-mop the parquet. Here, H is aware of the choice of various cleaning methods, but may choose an inappropriate one (i.e., scrubbing or wet-mopping). • Neg-TC imperatives. 
In general, neg-TC imperatives are used when S expects H to overlook a certain choice point; such choice point may be identified through a possible side effect that the wrong choice will cause. It may, for example, be used when H might execute an action in an undesirable way. Consider: (5) To make a piercing cut, first drill a hole in the waste stock on the interior of the pattern. If you want to save the waste stock for later use, drill the hole near a corner in the pattern. Be careful not to drill through the pattern line. Here, H has some choices as regards the exact position where to drill, so S constrains him by saying Be careful not to drill through the pattern line. So the hypothesis is that H's awareness of the presence of a certain choice point in executing a set of instructions affects the choice of one preventative expression over another. This hypothesis, however, was based on a small corpus and on intuitions. In this paper we present a more systematic analysis. Corpus and coding Our interest is in finding correlations between features related to the function of a preventative expression, and those related to the form of that expression. Functional features are the semantic features of the message being expressed and the pragmatic features of the context of communication. The form feature is the grammatical structure of the expression. In this section we will start with a discussion of our corpus, and then detail the function and form features that we have coded. We will conclude with a discussion of the inter-coder reliability of our coding. Corpus The raw instructional corpus from which we take all the examples we have coded has been collected opportunistically off the internet and from other sources. It is approximately 4 MB in size and is made entirely of written English instructional texts. The corpus includes a collection of recipes (1.7 MB), two complete do-it-yourself manuals (RD, 1991;McGowan and R. DuBern, 1991) (1.2 MB) 1 , a set of computer games instructions, the Sun Open-windows on-line instructions, and a collection of administrative application forms. As a collection, these texts are the result of a variety of authors working in a variety of instructional contexts. We broke the corpus texts into expressions using a simple sentence breaking algorithm and then collected the negative imperatives by probing for expressions that contain the grammatical forms we were interested in (e.g., expressions containing phrases such as "don't" and "take care"). The first row in Table 1 shows the frequency of occurrence for each of the grammatical forms we probed for. These grammatical forms, 1175 occurrences in all, constitute 2.5% of the expressions in the full corpus. We then filtered the results of this probe in two ways: 1. When the probe returned more than 100 examples for a grammatical form, we randomly selected around 100 of those returned. We took all the examples for those forms that returned fewer than 100 examples. The number of examples that resulted is shown in row 2 of Table 1 (labelled "raw sample"). 2. We removed those examples that, although they contained the desired lexical string, did not constitute negative imperatives. This pruning was done when the example was not an imperative (e.g., "If you don't see the Mail Tool window . . . ") and when the example was not negative (e.g., "Make sure to lock the bit tightly in the collar."). The number of examples which resulted is shown in row 3 of Table 1 (labelled "final coding"). 
Note that the majority of the "make sure" examples were removed here because they were ensurative. As shown in Table 1, the final corpus sample is made up of 239 examples, all of which have been coded for the features to be discussed in the next two sections. Form Because of its syntactic nature, the form feature coding was very robust. The possible feature values were: DONT -for the do not and don't forms discussed above; and neg-TC -for take care, make sure, ensure, be careful , be sure, be certain expressions with negative arguments. Function Features The design of semantic/pragmatic features usually requires a series of iterations and modifications. We will discuss our schema, explaining the reasons behind our choices when necessary. We coded for two function features: intentionality and awareness, which we will illustrate in turn using α to refer to the negated action. The conception of these features was inspired by the hypothesis put forward in Section 3, as we will briefly discuss below. Intentionality This feature encodes whether the agent consciously adopts the intention of performing α. We settled on two values, CON(scious) and UNC(onscious). As the names of these values may be slightly misleading, we discuss them in detail here: CON is used to code situations where S expects H to intend to perform α. This often happens when S expects H to be aware that α is an alternative to the β H should perform, and to consider them equivalent, while S knows that this is not the case. Consider Ex. (4) above. If the negative imperative Do not scrub or wet-mop the parquet were not included, the agent might have chosen to scrub or wet-mop because these actions may result in deeper cleaning, and because he was unaware of the bad consequences. UNC is perhaps a less felicitous name because we certainly don't mean that the agent may perform actions while being unconscious! Rather, we mean that the agent doesn't realise that there is a choice point It is used in two situations: when α is totally accidental, as in: (6) Be careful not to burn the garlic. In the domain of cooking, no agent would consciously burn the garlic. Alternatively, an example is coded as UNC when α has to be intentionally planned for, but the agent may not take into account a crucial feature of α, as in: (7) Don't charge -or store -a tool where the temperature is below 40 degrees F or above 105 degrees. While clearly the agent will have to intend to perform charging or storing a tool , he is likely to overlook, at least in S's conception, that temperature could have a negative impact on the results of such actions. Awareness This binary feature captures whether the agent is AWare or UNAWare that the consequences of α are bad. These features are detailed now: DONT Neg-TC don't do not take care make sure be careful be sure Raw Grep 417 385 21 229 52 71 Raw Sample 100 99 21 104 52 71 Final Coding 78 89 17 3 46 6 167 72 Table 1: Distribution of negative imperatives UNAW is used when H is perceived to be unaware that α is bad. For example, Example (7) ("Don't charge -or store -a tool where the temperature is below 40 degrees F or above 105 degrees") is coded as UNAW because it is unlikely that the reader will know about this restriction; AW is used when H is aware that α is bad. Example (6) ("Be careful not to burn the garlic") is coded as AW because the reader is well aware that burning things when cooking them is bad. Inter-coder reliability Each author independently coded each of the features for all the examples in the sample. 
The percentage agreement is 76.1% for intentionality and 92.5% for awareness. Until very recently, these values would most likely have been accepted as a basis for further analysis. To support a more rigorous analysis, however, we have followed Carletta's suggestion (1996) of using the K coefficient (Siegel and Castellan, 1988) as a measure of coder agreement. This statistic not only measures agreement, but also factors out chance agreement, and is used for nominal (or categorical) scales. In nominal scales, there is no relation between the different categories, and classification induces equivalence classes on the set of classified objects. In our coding schema, each feature determines a nominal scale on its own. Thus, we report the values of the K statistics for each feature we coded for. If P (A) is the proportion of times the coders agree, and P (E) is the proportion of times that coders are expected to agree by chance, K is computed as follows: K = P (A) − P (E) 1 − P (E) Thus, if there is total agreement among the coders, K will be 1; if there is no agreement other than chance agreement, K will be 0. There are various ways of computing P (E); according to Siegel and Castellan (1988) Table 3: Kappa values for function features agree on the following formula, which we also adopted: P (E) = m j=1 p 2 j where m is the number of categories, and p j is the proportion of objects assigned to category j. The mere fact that K may have a value k greater than zero is not sufficient to draw any conclusion, though, as it must be established whether k is significantly different from zero. While Siegel and Castellan (1988, p.289) point out that it is possible to check the significance of K when the number of objects is large, Rietveld and van Hout (1993) suggest a much simpler correlation between K values and inter-coder reliability, shown in Figure 2. For the form feature, the Kappa value is 1.0, which is not surprising given its syntactic nature. The function features, which are more subjective in nature, engender more disagreement among coders, as shown by the K values in Table 3. According to Rietveld and van Hout, the awareness feature shows "substantial" agreement and the intentionality feature shows "moderate" agreement. Analysis In our analysis, we have attempted to discover and to empirically verify correlations between the feature χ 2 significance level intentionality 51.4 0.001 awareness 56.9 0.001 Table 4: χ 2 statistic and significance levels function features and the form feature. We did this by computing χ 2 statistics for the various functional features as they compared with form distinction between DONT and neg-TC imperatives. Given that the features were all two-valued we were able to use the following definition of the statistic, taken from (Siegel and Castellan, 1988): χ 2 = N (|AD − BC| − N 2 ) 2 (A + B)(C + D)(A + C)(B + D) Here N is the total number of examples and A-D are the values of the elements of the 2×2 contingency table (see Figure 5). The χ 2 statistic is appropriate for the correlation of two independent samples of nominally coded data, and this particular definition of it is in line with Siegel's recommendations for 2×2 contingency tables in which N > 40 (Siegel and Castellan, 1988, page 123). Concerning the assumption of independence, while it is, in fact, possible that some of the examples may have been written by a single author, the corpus was written by a considerable number of authors. 
Even the larger works (e.g., the cookbooks and the do-it-yourself manuals) are collections of the work of multiple authors. We felt it acceptable, therefore, to view the examples as independent and use the χ 2 statistic. To compute χ 2 for the coded examples in our corpus, we collected all the examples for which we agreed on both of the functional features (i.e., intentionality and awareness). Of the 239 total examples, 165 met this criteria. Table 4 lists the χ 2 statistic and its related level of significance for each of the features. The significance levels for intentionality and awareness indicate that the features do correlate with the forms. We will focus on these features in the remainder of this section. The 2×2 contingency table from which the intentionality value was derived is shown in Table 5. This table shows the frequencies of examples marked as conscious or unconscious in relation to those marked as DONT and neg-TC. A strong tendency is indicated to prevent actions the reader is likely to consciously execute using the DONT form. Note that the In Section 3 we speculated that the hearer's awareness of the choice point, or more accurately, the writer's view of the hearer's awareness, would affect the appropriate form of expression of the preventative expression. In our coding, awareness was then shifted to awareness of bad consequences rather than of choices per se. However, the basic intuition that awareness plays a role in the choice of surface form is supported, as the contingency table for this feature in Table 6 shows. It indicates a strong preference for the use of the DONT form when the reader is presumed to be unaware of the negative consequences of the action to be prevented, the reverse being true for the use of the neg-TC form. The results of this analysis, therefore, demonstrate that the intentionality and awareness features do co-vary with grammatical form, and in particular, support a form of the hypothesis put forward in Section 3. Application We have successfully used the correlations discussed here to support the generation of warning messages in the drafter project (Paris and Vander Linden, 1996). drafter is a technical authoring support tool which generates instructions for graphical interfaces. It allows its users to specify a procedure to be expressed in instructional form, and in particular, allows them to specify actions which must be prevented at the appropriate points in the procedure. At generation time, then, drafter must be able to select the appropriate grammatical form for the preventative expression. We have used the correlations discussed in this paper to build the text planning rules required to generate negative imperatives. This is discussed in more detail elsewhere (Vander Linden and Di Eugenio, 1996), but in short, we input our coded examples to Quinlan's C4.5 learning algorithm (Quinlan, 1993), which induces a decision tree mapping from the functional features to the appropriate form. Currently, these features are set manually by the user as they are too difficult to derive automatically. Conclusions This paper has detailed a corpus study of preventative expressions in instructional text. The study highlighted correlations between functional features and grammatical form, the sort of correlations useful in both interpretation and generation. Studies such as this have been done before in Computational Linguistics, although not, to our knowledge, on preventative expressions. The point we want to emphasise here is a methodological one. 
Only recently have such studies begun to employ the more rigorous statistical measures of agreement and reproducibility used here. We have found the Kappa statistic critical in the definition of the features we coded (see Section 4.4). We intend to augment and refine the list of features discussed here, and we hope to use them in interpretation applications as well as in generation. We also intend to extend the analysis to ensurative expressions.
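For reference, here is a minimal sketch of the two statistics used in this study, the K coefficient and the 2×2 χ² with continuity correction; the example codings and cell counts below are hypothetical, not the ones reported above.

```python
from collections import Counter

def kappa(labels_a, labels_b):
    """K = (P(A) - P(E)) / (1 - P(E)) for two coders over the same items."""
    n = len(labels_a)
    p_a = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed agreement P(A)
    # P(E) = sum_j p_j^2, with p_j the proportion of objects assigned to category j
    # (pooled over both coders), following Siegel and Castellan (1988).
    pooled = Counter(labels_a) + Counter(labels_b)
    p_e = sum((c / (2 * n)) ** 2 for c in pooled.values())
    return (p_a - p_e) / (1 - p_e)

def chi_square_2x2(a, b, c, d):
    """Chi-square with continuity correction for the 2x2 table [[a, b], [c, d]],
    appropriate when N > 40 (Siegel and Castellan, 1988)."""
    n = a + b + c + d
    return n * (abs(a * d - b * c) - n / 2) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical intentionality codings and contingency counts:
print(kappa(["CON", "UNC", "CON", "UNC"], ["CON", "UNC", "UNC", "UNC"]))
print(chi_square_2x2(70, 20, 15, 60))
```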
3,031
1908.11078
2971016516
Hashing is promising for large-scale information retrieval tasks thanks to the efficiency of distance evaluation between binary codes. Generative hashing is often used to generate hashing codes in an unsupervised way. However, existing generative hashing methods only consider simple priors, like Gaussian and Bernoulli priors, which limits their ability to further improve performance. In this paper, two mixture-prior generative models are proposed with the objective of producing high-quality hashing codes for documents. Specifically, a Gaussian mixture prior is first imposed on the variational auto-encoder (VAE), followed by a separate step that casts the continuous latent representation of the VAE into binary codes. To avoid the performance loss caused by this separate casting, a model using a Bernoulli mixture prior is further developed, which admits end-to-end training by resorting to the straight-through (ST) discrete gradient estimator. Experimental results on several benchmark datasets demonstrate that the proposed methods, especially the one using the Bernoulli mixture prior, consistently outperform existing ones by a substantial margin.
Recently, VDSH @cite_28 proposed to use a VAE to learn the latent representations of documents and then use a separate stage to cast the continuous representations into binary codes. While fairly successful, this generative hashing model requires two-stage training. To tackle this problem, NASH @cite_11 proposed to substitute the Gaussian prior in VDSH with a Bernoulli prior, using a straight-through estimator @cite_3 to estimate the gradients of the neural network involving binary variables. This model can be trained in an end-to-end manner. Our models differ from VDSH and NASH in that mixture priors are employed to yield better hashing codes, whereas only the simplest priors are used in both VDSH and NASH.
{ "abstract": [ "As the amount of textual data has been rapidly increasing over the past decade, efficient similarity search methods have become a crucial component of large-scale information retrieval systems. A popular strategy is to represent original data samples by compact binary codes through hashing. A spectrum of machine learning methods have been utilized, but they often lack expressiveness and flexibility in modeling to learn effective representations. The recent advances of deep learning in a wide range of applications has demonstrated its capability to learn robust and powerful feature representations for complex data. Especially, deep generative models naturally combine the expressiveness of probabilistic generative models with the high capacity of deep neural networks, which is very suitable for text modeling. However, little work has leveraged the recent progress in deep learning for text hashing. In this paper, we propose a series of novel deep document generative models for text hashing. The first proposed model is unsupervised while the second one is supervised by utilizing document labels tags for hashing. The third model further considers document-specific factors that affect the generation of words. The probabilistic generative formulation of the proposed models provides a principled framework for model extension, uncertainty estimation, simulation, and interpretability. Based on variational inference and reparameterization, the proposed models can be interpreted as encoder-decoder deep neural networks and thus they are capable of learning complex nonlinear distributed representations of the original documents. We conduct a comprehensive set of experiments on four public testbeds. The experimental results have demonstrated the effectiveness of the proposed supervised learning models for text hashing.", "Stochastic neurons and hard non-linearities can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic or non-smooth neurons? I.e., can we \"back-propagate\" through these stochastic neurons? We examine this question, existing approaches, and compare four families of solutions, applicable in different settings. One of them is the minimum variance unbiased gradient estimator for stochatic binary neurons (a special case of the REINFORCE algorithm). A second approach, introduced here, decomposes the operation of a binary stochastic neuron into a stochastic binary part and a smooth differentiable part, which approximates the expected effect of the pure stochatic binary neuron to first order. A third approach involves the injection of additive or multiplicative noise in a computational graph that is otherwise differentiable. A fourth approach heuristically copies the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the sigmoid argument (we call this the straight-through estimator). To explore a context where these estimators are useful, we consider a small-scale version of conditional computation , where sparse stochastic units form a distributed representation of gaters that can turn off in combinatorially many ways large chunks of the computation performed in the rest of the neural network. In this case, it is important that the gating units produce an actual 0 most of the time. 
The resulting sparsity can be potentially be exploited to greatly reduce the computational cost of large deep networks for which conditional computation would be useful.", "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly back-propagated through the discrete latent variable to optimize the hash function. We also draw connections between proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios." ], "cite_N": [ "@cite_28", "@cite_3", "@cite_11" ], "mid": [ "2740797857", "2242818861", "2798769241" ] }
Document Hashing with Mixture-Prior Generative Models
Similarity search aims to find items that look most similar to the query one from a huge amount of data , and are found in extensive applications like plagiarism analysis (Stein et al., 2007), collaborative filtering (Koren, 2008;, content-based multimedia retrieval (Lew et al., 2006), web services (Dong et al., 2004) etc. Semantic hashing is an effective way to accelerate the searching process by representing every document with a compact binary code. In this way, one only needs to evaluate the * Corresponding author. hamming distance between binary codes, which is much cheaper than the Euclidean distance calculation in the original feature space. Existing hashing methods can be roughly divided into data-independent and data-dependent categories. Data-independent methods employ random projections to construct hash functions without any consideration on data characteristics, like the locality sensitive hashing (LSH) algorithm (Datar et al., 2004). On the contrary, data dependent hashing seeks to learn a hash function from the given training data in a supervised or an unsupervised way. In the supervised case, a deterministic function which maps the data to a binary representation is trained by using the provided supervised information (e.g. labels) (Liu et al., 2012;Shen et al., 2015;. However, the supervised information is often very difficult to obtain or is not available at all. Unsupervised hashing seeks to obtain binary representations by leveraging the inherent structure information in data, such as the spectral hashing (Weiss et al., 2009), graph hashing (Liu et al., 2011), iterative quantization (Gong et al., 2013), self-taught hashing (Zhang et al., 2010) etc. Generative models are often considered as the most natural way for unsupervised representation learning (Miao et al., 2016;Bowman et al., 2015;Yang et al., 2017). Many efforts have been devoted to hashing by using generative models. In (Chaidaroon and Fang, 2017), variational deep semantic hashing (VDSH) is proposed to solve the semantic hashing problem by using the variational autoencoder (VAE) (Kingma and Welling, 2013). However, this model requires a two-stage training since a separate step is needed to cast the continuous representations in VAE into binary codes. Under the two-stage training strategy, the model is more prone to get stuck at poor performance (Xu et al., 2015;Zhang et al., 2010;Wang et al., 2013). To address the issue, the neural architecture for generative semantic hashing (NASH) in proposed to use a Bernoulli prior to replace the Gaussian prior in VDSH, and further use the straight-through (ST) method (Bengio et al., 2013) to estimate the gradients of functions involving binary variables. It is shown that the endto-end training brings a remarkable performance improvement over the two-stage training method in VDSH. Despite of superior performances, only the simplest priors are used in these models, i.e. Gaussian in VDSH and Bernoulli in NASH. However, it is widely known that priors play an important role on the performance of generative models (Goyal et al., 2017;Chen et al., 2016;Jiang et al., 2016). Motivated by this observation, in this paper, we propose to produce high-quality hashing codes by imposing appropriate mixture priors on generative models. Specifically, we first propose to model documents by a VAE with a Gaussian mixture prior. 
However, similar to the VDSH, the proposed method also requires a separate stage to cast the continuous representation into binary form, making it suffer from the same pains of twostage training. Then we further propose to use a Bernoulli mixture as the prior, in hopes to yield binary representations directly. An end-to-end method is further developed to train the model, by resorting to the straight-through gradient estimator for neural networks involving binary random variables. Extensive experiments are conducted on benchmark datasets, which show substantial gains of the proposed mixture-prior methods over existing ones, especially the method with a Bernoulli mixture prior. Semantic Hashing by Imposing Mixture Priors In this section, we investigate how to obtain similarity-preserved hashing codes by imposing different mixture priors on variational encoder. Preliminaries on Generative Semantic Hashing Let x ∈ Z |V | + denote the bag-of-words representation of a document and x i ∈ {0, 1} |V | denote the one-hot vector representation of the i-th word of the document, where |V | denotes the vocabulary size. VDSH in (Chaidaroon and Fang, 2017) proposed to model a document D, which is de-fined by a sequence of one-hot word representa- tions {x i } |D| i=1 , with the joint PDF p(D, z) = p θ (D|z)p(z),(1) where the prior p(z) is the standard Gaussian distribution N (0, I); the likelihood has the factorized form p θ (D|z) = |D| i=1 p θ (x i |z), and p θ (x i |z) = exp(z T Ex i + b i ) |V | j=1 exp(z T Ex j + b j ) ; (2) E ∈ R m×|V | is a parameter matrix which connects latent representation z to one-hot representation x i of the i-th word, with m being the dimension of z; b i is the bias term and θ = {E, b 1 , ..., b |V | }. It is known that generative models with better modeling capability often imply that the obtained latent representations are also more informative. To increase the modeling ability of (1), we may resort to more complex likelihood p θ (D|z), such as using deep neural networks to relate the latent z to the observation x i , instead of the simple softmax function in (2). However, as indicated in , employing expressive nonlinear decoders likely destroy the distance-keeping property, which is essential to yield good hashing codes. In this paper, instead of employing a more complex decoder p θ (D|z), more expressive priors are leveraged to address this issue. Semantic Hashing by Imposing Gaussian Mixture Priors To begin with, we first replace the standard Gaussian prior p(z) = N (0, I) in (1) by the following Gaussian mixture prior p(z) = K k=1 π k · N µ k , diag σ 2 k ,(3) where K is the number of mixture components; π k is the probability of choosing the k-th component and K k π k = 1; µ k ∈ R m and σ 2 k ∈ R m + are the mean and variance vectors of the Gaussian distribution of the k-th component; and diag(·) means diagonalizing the vector. For any sample z ∼ p(z), it can be equivalently generated by a two-stage procedure: 1) choosing a component c ∈ {1, 2, · · · , K} according to the categorical distribution Cat(π) with π = [π 1 , π 2 , · · · , π K ]; 2) drawing a sample from the z d i a g m s  Figure 1: The architectures of the GMSH and BMSH. The data generative process of GMSH is done as follows: (1) Pick a component c ∈ {1, 2, ..., K} from Cat(π) with π = [π 1 , π 2 , ..., π K ]; (2) Draw a sample z from the picked Gaussian distribution N µ c , diag(σ 2 c ) ; (3) Use g θ (z) to decode the sample z into an observablex. 
The process of generating data in BMSH can be described as follows: (1) Choose a component c from Cat(π); (2) Sample a latent vector from the chosen distribution Bernoulli(γ c ); (3) Inject data-dependent noise into z, and draw z from N (z, diag(σ 2 c )); (4) Then use decoder g θ (z ) to reconstructx. distribution N µ c , diag σ 2 c . Thus, the document D is modelled as p(D, z, c) = p θ (D|z)p(z|c)p(c),(4) where p(z|c) = N µ c , diag σ 2 c , p(c) = Cat(π) and p θ (D|z) is defined the same as (2). To train the model, we seek to optimize the lower bound of the log-likelihood L = E q φ (z,c|x) log p θ (D|z)p(z|c)p(c) q φ (z, c|x) ,(5) where q φ (z, c|x) is the approximate posterior distribution of p(z, c|x) parameterized by φ; here x could be any representation of the documents, like the bag-of-words, TFIDF etc. For the sake of tractability, q φ (z, c|x) is further assumed to maintain a factorized form, i.e., q φ (z, c|x) = q φ (z|x)q φ (c|x). Substituting it into the lower bound gives L =E q φ (z|x) [log p θ (D|z)] − KL (q φ (c|x)||p(c)) − E q φ (c|x) [KL (q φ (z|x)||p(z|c))] .(6) For simplicity, we assume that q φ (z|x) and q φ (c|x) take the forms of Gaussian and categorical distributions, respectively, and the distribution parameters are defined as the outputs of neural networks. The entire model, including the generative and inference arms, is illustrated in Figure 1(a). Using the properties of Gaussian and categorical distributions, the last two terms in (6) can be expressed in a closed form. Combining with the reparameterization trick in stochastic gradient variational bayes (SGVB) estimator (Kingma and Welling, 2013), the lower bound L can be optimized w.r.t. model parameters {θ, π, µ k , σ k , φ} by error backpropagation and SGD algorithms directly. Given a document x, its hashing code can be obtained through two steps: 1) mapping x to its latent representation by z = µ φ (x), where the µ φ (x) is the encoder mean µ φ (·); 2) thresholding z into binary form. As suggested in (Wang et al., 2013;Chaidaroon et al., 2018;Chaidaroon and Fang, 2017) that when hashing a batch of documents, we can use the median value of the elements in z as the critical value, and threshold each element of z into 0 and 1 by comparing it to this critical value. For presentation conveniences, the proposed semantic hashing model with a Gaussian mixture priors is referred as GMSH. Semantic Hashing by Imposing Bernoulli Mixture Priors To avoid the separate casting step used in GMSH, inspired by NASH , we further propose a Semantic Hashing model with a Bernoulli Mixture prior (BMSH). Specifically, we replace the Gaussian mixture prior in GMSH with the following Bernoulli mixture prior p(z) = K k=1 π k · Bernoulli (γ k ) ,(7) where γ k ∈ [0, 1] m represents the probabilities of z being 1. Effectively, the Bernoulli mixture prior, in addition to generating discrete samples, plays a similar role as Gaussian mixture prior, which make the samples drawn from different components have different patterns. The samples from the Bernoulli mixture can be generated by first choosing a component c ∈ {1, 2, · · · , K} from Cat(π) and then drawing a sample from the chosen distribution Bernoulli(γ c ). The entire model can be described as p(D, z, c) = p θ (D|z)p(z|c)p(c), where p θ (D|z) is defined the same as (2), and p(c) = Cat(π) and p(z|c) = Bernoulli(γ c ). Similar to GMSH, the model can be trained by maximizing the variational lower bound, which maintains the same form as (6). 
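As an illustration of the closed-form terms mentioned above and of the thresholding step, the following PyTorch-style sketch computes the two KL terms of Eq. (6) for GMSH under its diagonal-Gaussian assumptions, together with a median binarization; the tensor shapes, names, and the batch-median reading of the thresholding rule are assumptions, not the exact implementation.

```python
import torch

def gmsh_kl_terms(mu, logvar, c_logits, pi, mu_k, logvar_k):
    """KL(q(c|x)||p(c)) + E_{q(c|x)}[KL(q(z|x)||p(z|c))] from Eq. (6).
    mu, logvar: (B, m) Gaussian posterior q(z|x); c_logits: (B, K) logits of q(c|x);
    pi: (K,) mixture weights; mu_k, logvar_k: (K, m) prior component parameters."""
    q_c = torch.softmax(c_logits, dim=-1)                                  # (B, K)
    kl_cat = (q_c * (q_c.clamp_min(1e-10).log()
                     - pi.clamp_min(1e-10).log())).sum(-1)
    var, var_k = logvar.exp(), logvar_k.exp()
    # KL between the diagonal Gaussian q(z|x) and every prior component p(z|c).
    kl_gauss = 0.5 * (logvar_k.unsqueeze(0) - logvar.unsqueeze(1)
                      + (var.unsqueeze(1) + (mu.unsqueeze(1) - mu_k.unsqueeze(0)) ** 2)
                      / var_k.unsqueeze(0) - 1.0).sum(-1)                  # (B, K)
    return kl_cat + (q_c * kl_gauss).sum(-1)

def binarize_by_median(z):
    """Threshold a batch of continuous codes at the per-dimension batch median,
    one reading of the median rule described in the text."""
    med = z.median(dim=0, keepdim=True).values
    return (z > med).to(torch.uint8)
```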
Different from GMSH, in which q φ (z|x) and p(z|c) are both in a Gaussian form, here p(z|c) is a Bernoulli distribution by definition, and thus q φ (z|x) is assumed to be the Bernoulli form as well, with the probability of the i-th element z i taking 1 defined as q φ (z i = 1|x) σ g i φ (x)(8) for i = 1, 2, · · · , m. Here g i φ (·) indicates the ith output of a neural network parameterized by φ. Similarly, we also define the posterior regarding which component to choose as q φ (c = k|x) = exp h k φ (x) K i=1 exp h i φ (x) ,(9) where h k φ (x) is the k-th output of a neural network parameterized by φ. With denotation α i = q φ (z i = 1|x) and β k = q φ (c = k|x), the last two terms in (6) can be expressed in close-form as KL (q φ (c|x)||p(c)) = K c=1 β c log β c π , E q φ (c|x) [KL (q φ (z|x)||p(z|c))] = K c=1 β c m i=1 α i log α i γ i c +(1− α i ) log 1− α i 1− γ i c , where γ i c denotes the i-th element of γ c . Due to the Bernoulli assumption for the posterior q φ (z|x), the commonly used reparameterization trick for Gaussian distribution cannot be used to directly estimate the first term E q φ (z|x) [log p θ (D|z)] in (6). Fortunately, inspired by the straight-through gradient estimator in (Bengio et al., 2013), we can parameterize the i-th element of binary sample z from q φ (z|x) as z i = 0.5 × sign σ(g i φ (x)) − ξ i + 1 ,(10) where sign(·) the is the sign function, which is equal to 1 for nonnegative inputs and -1 otherwise; and ξ i ∼ Uniform(0, 1) is a uniformly random sample between 0 and 1. The reparameterization method used above can guarantee generating binary samples. However, backpropagation cannot be used to optimize the lower bound L since the gradient of sign(·) w.r.t. its input is zero almost everywhere. To address this problem, the straight-through(ST) estimator (Bengio et al., 2013) is employed to estimate the gradient for the binary random variables, where the derivative of z i w.r.t φ is simply approximated by 0.5 × ∂σ(g i φ (x)) ∂φ . Thus, the gradients can then be backpropagated through discrete variables. Similar to NASH , data-dependent noises are also injected into the latent variables when reconstructing the document x so as to obtain more robust binary representations. The entire model of BMSH, including generative and inference parts, is illustrated in Figure 1(b). To understand how the mixture-prior model works differently from the simple prior model, we examine the main difference term E q φ (c|x) [KL (q φ (z|x)||p(z|c))] in (6), where q φ (c|x) is the approximate posterior probability that indicates the document x is generated by the c-th component distribution with c ∈ {1, 2, · · · , K}. In the mixture-prior model, the approximate posterior q φ (z|x) is compared to all mixture components p(z|c) = N µ c , diag(σ 2 c ) . The term E q φ (c|x) [KL (q φ (z|x)||p(z|c))] can be understood as the average of all these KLdivergences weighted by the probabilities q φ (c|x). Thus, comparing to the simple-prior model, the mixture-prior model is endowed with more flexibilities, allowing the documents to be regularized by different mixture components according to their context. Extensions to Supervised Hashing When label information is available, it can be leveraged to yield more effective hashing codes since labels provide extra information about the similarities of documents. Specifically, a mapping from the latent representation z to the cor-responding label y is learned for each document. 
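Before continuing with the supervised extension, a minimal PyTorch-style rendering of the straight-through sampling step in Eq. (10) is sketched below (our own illustration, with assumed names); the surrogate gradient is 0.5 times the sigmoid's derivative, as stated above.

```python
import torch

def st_bernoulli_sample(logits):
    """z_i = 0.5 * (sign(sigmoid(g_i(x)) - xi_i) + 1), with xi_i ~ Uniform(0, 1).
    Forward pass: exact binary samples. Backward pass: the gradient is approximated
    by 0.5 * d sigmoid(g_i)/d phi, i.e. the straight-through estimator."""
    probs = torch.sigmoid(logits)
    xi = torch.rand_like(probs)
    hard = 0.5 * (torch.sign(probs - xi) + 1.0)
    # Straight-through trick: value of `hard` on the forward pass,
    # gradient of 0.5 * probs on the backward pass.
    return hard.detach() + 0.5 * probs - (0.5 * probs).detach()
```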
The mapping encourages latent representations of documents with the same label to be close in the latent space, while those with different labels to be distant. A classifier built from a two-layer MLP is employed to parameterize this mapping, with its cross-entropy loss denoted by L dis (z, y). Taking the supervised objective into account, the total loss is defined as L total = −L + αL dis (z, y),(11) where L is the lower bound arising in GMSH or BMSH model; α controls the relative weight of the two losses. By examining the total loss L total , it can be seen that minimizing the loss encourages the model to learn a representation z that accounts for not only the unsupervised content similarities of documents, but also the supervised similarities from the extra label information. Performance Evaluation of Unsupervised Semantic Hashing Table 1 shows the performance of the proposed and baseline models on three datasets under the unsupervised setting, with the number of hashing bits ranging from 16 to 128. From the experimental results, it can be seen that GMSH outperforms previous models under all considered scenarios on both TMC and Reuters. It also achieves better performance on 20Newsgroups when the length of hashing codes is large, e.g. 64 or 128. Comparing to VDSH using the simple Gaussian prior, the proposed GMSH using a Gaussian mixture prior exhibits better retrieval performance overall. This strongly demonstrates the benefits of using mixture priors on the task of semantic hashing. One possible explanation is that the mixture prior enables the documents from different categories to be regularized by different distributions, guiding the model to learn more distinguishable representations for documents from different categories. It can be further observed that among all methods, BMSH achieves the best performance under different datasets and hashing codes length consistently. This may be attributed to the imposed Bernoulli mixture prior, which offers both the ad- vantages of producing more distinguishable codes with a mixture prior and end-to-end training enabled by a Bernoulli prior. BMSH integrates the merits of NASH and GMSH, and thus is more suitable for the hashing task. Figure 2 shows how retrieval precisions vary with the number of hashing bits on the three datasets. It can be observed that as the number increases from 32 to 128, the retrieval precisions of most previous models tend to decrease. This phenomenon is especially obvious for VDSH, in which the precisions on all three datasets drop by a significant margin. This interesting phenomenon has been reported in previous works Chaidaroon and Fang, 2017;Wang et al., 2013;Liu et al., 2012), and the reason could be overfitting since the model with long hashing codes is more likely to overfitting (Chaidaroon and Fang, 2017;. However, it can be seen that our model is more robust to the number of hashing bits. When the number is increased to 64 or 128, the performance of our models is kept almost unchanged. This may be also attributed to the mixture priors imposed in our models, which can regularize the models more effectively. Performance Evaluation of Supervised Semantic Hashing We evaluate the performance of supervised hashing in this section. Table 2 shows the performances of different supervised hashing models on three datasets under different lengths of hashing codes. 
We observe that all of the VAE-based generative hashing models (i.e VDSH, NASH, GMSH and BMSH) exhibit better performance, demonstrating the effectiveness of generative models on the task of semantic hashing. It can be also seen that BMSH-S achieves the best performance, suggesting that the advantages of Bernoulli mixture priors can also be extended to the supervised scenarios. To gain a better understanding about the relative performance gain of the four proposed models, the retrieval precisions of GMSH, BMSH, GMSH-S and BMSH-S using 32-bit hashing codes on the three datasets are plotted together in Figure 4. It can be obviously seen that GMSH-S and BMSH-S outperform GMSH and BMSH by a substantial margin, respectively. This suggests that the proposed generative hashing models can also leverage the label information to improve the hashing codes' quality. Impacts of the Component Number To investigate the impacts of component number, experiments are conducted for GMSH and BMSH under different values of K. For demonstration convenience, the length of hashing codes is fixed to 32. Table 3 shows the precisions of top 100 retrieved documents when the number of components K is set to different values. We can see that the retrieval precisions of the proposed models, especially the BMSH, are quite robust to this parameter. For BMSH, the difference between the best and worst precisions on the three datasets are 0.0123, 0.0052 and 0.0134, respectively, which are small comparing to the gains that BMSH has achieved. One exception is the performance of GMSH on 20Newsgroups dataset. However, as seen from Table 3, as long as the number K is not too small, the performance loss is still acceptable. It is worth noting that the worst performance of GMSH on 20Newsgroups is 0.4708, which is still better than VDSH's 0.4327 as in Table 1. For the BMSH model, the performance is stable across all the considered datasets and K values. Visualization of Learned Embeddings To understand the performance gains of the proposed models better, we visualize the learned representations of VDSH-S, GMSH-S and BMSH-S on 20Newsgroups dataset. UMAP (McInnes et al., 2018) is used to project the 32-dimensional latent representations into a 2-dimensional space, as shown in Figure 3. Each data point in the figure denotes a document, with each color representing one category. The number shown with the color is the ground truth category ID. It can be observed from Figure 3 (a) and (b) that more embeddings are clustered correctly when the Gaussian mixture prior is used. This confirms the advantages of using mixture priors in the task of hashing. Furthermore, it is observed that the latent embeddings learned by BMSH-S can be clustered almost perfectly. In contrast, many embeddings are found to be clustered incorrectly for the other two models. This observation is consistent with the conjecture that mixture prior and end-to-end training are both useful for semantic hashing. Conclusions In this paper, deep generative models with mixture priors were proposed for the tasks of semantic hashing. We first proposed to use a Gaussian mixture prior, instead of the standard Gaussian prior in VAE, to learn the representations of documents. A separate step was then used to cast the continuous latent representations into binary hashing codes. To avoid the requirement of a separate casting step, we further proposed to use the Bernoulli mixture prior, which offers the advantages of both mixture prior and the end-to-end training. 
Compared with strong baselines on three public datasets, the experimental results indicate that the proposed methods using mixture priors outperform existing models by a substantial margin. In particular, the semantic hashing model with the Bernoulli mixture prior (BMSH) achieves state-of-the-art results on all three datasets considered in this paper.
3,642
1908.11314
2970733215
Blind image denoising is an important yet very challenging problem in computer vision due to the complicated acquisition process of real images. In this work we propose a new variational inference method, which integrates both noise estimation and image denoising into a unique Bayesian framework, for blind image denoising. Specifically, an approximate posterior, parameterized by deep neural networks, is presented by taking the intrinsic clean image and noise variances as latent variables conditioned on the input noisy image. This posterior provides explicit parametric forms for all its involved hyper-parameters, and thus can be easily implemented for blind image denoising with automatic noise estimation for the test noisy image. On one hand, as other data-driven deep learning methods, our method, namely variational denoising network (VDN), can perform denoising efficiently due to its explicit form of posterior expression. On the other hand, VDN inherits the advantages of traditional model-driven approaches, especially the good generalization capability of generative models. VDN has good interpretability and can be flexibly utilized to estimate and remove complicated non-i.i.d. noise collected in real scenarios. Comprehensive experiments are performed to substantiate the superiority of our method in blind image denoising.
Most classical image denoising methods belong to this category: they design a MAP model with a fidelity (loss) term and a regularization term that delivers pre-known image priors. Along this line, total variation denoising @cite_30 , anisotropic diffusion @cite_40 and wavelet coring @cite_25 use the statistical regularities of images to remove noise. Later, the non-local similarity prior, meaning that many small patches in non-local image areas share similar configurations, was widely used in image denoising; typical examples include CBM3D @cite_13 and non-local means @cite_10 . Dictionary learning methods @cite_32 @cite_6 @cite_9 and Field-of-Experts (FoE) @cite_23 , which also encode prior knowledge of image patches, have likewise been applied to the task. Several other approaches focus on the fidelity term, which is mainly determined by the noise assumption on the data. For example, Multiscale @cite_18 assumed the noise of each patch and its similar patches in the same image to follow a correlated Gaussian distribution, while LR-MoG @cite_19 , DP-GMM @cite_21 and DDPT @cite_8 fitted the image noise using a Mixture of Gaussians (MoG) as a noise approximator.
{ "abstract": [ "A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t--- 0o the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.", "", "Arguably several thousands papers are dedicated to image denoising. Most papers assume a fixed noise model, mainly white Gaussian or Poissonian. This assumption is only valid for raw images. Yet, in most images handled by the public and even by scientists, the noise model is imperfectly known or unknown. End users only dispose the result of a complex image processing chain effectuated by uncontrolled hardware and software (and sometimes by chemical means). For such images, recent progress in noise estimation permits to estimate from a single image a noise model, which is simultaneously signal and frequency dependent. We propose here a multiscale denoising algorithm adapted to this broad noise model. This leads to a blind denoising algorithm which we demonstrate on real JPEG images and on scans of old photographs for which the formation model is unknown. The consistency of this algorithm is also verified on simulated distorted images. This algorithm is finally compared with the unique state of the art previous blind denoising method.", "Most existing image denoising approaches assumed the noise to be homogeneous white Gaussian distributed with known intensity. However, in real noisy images, the noise models are usually unknown beforehand and can be much more complex. This paper addresses this problem and proposes a novel blind image denoising algorithm to recover the clean image from noisy one with the unknown noise model. To model the empirical noise of an image, our method introduces the mixture of Gaussian distribution, which is flexible enough to approximate different continuous distributions. The problem of blind image denoising is reformulated as a learning problem. The procedure is to first build a two-layer structural model for noisy patches and consider the clean ones as latent variable. To control the complexity of the noisy patch model, this work proposes a novel Bayesian nonparametric prior called “Dependent Dirichlet Process Tree” to build the model. Then, this study derives a variational inference algorithm to estimate model parameters and recover clean patches. We apply our method on synthesis and real noisy images with different noise models. Comparing with previous approaches, ours achieves better performance. The experimental results indicate the efficiency of the proposed algorithm to cope with practical image denoising tasks.", "Most of existing image denoising methods assume the corrupted noise to be additive white Gaussian noise (AWGN). 
However, the realistic noise in real-world noisy images is much more complex than AWGN, and is hard to be modeled by simple analytical distributions. As a result, many state-of-the-art denoising methods in literature become much less effective when applied to real-world noisy images captured by CCD or CMOS cameras. In this paper, we develop a trilateral weighted sparse coding (TWSC) scheme for robust real-world image denoising. Specifically, we introduce three weight matrices into the data and regularization terms of the sparse coding framework to characterize the statistics of realistic noise and image priors. TWSC can be reformulated as a linear equality-constrained problem and can be solved by the alternating direction method of multipliers. The existence and uniqueness of the solution and convergence of the proposed algorithm are analyzed. Extensive experiments demonstrate that the proposed TWSC scheme outperforms state-of-the-art denoising methods on removing realistic noise.", "", "As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.", "Simultaneous sparse coding (SSC) or nonlocal image representation has shown great potential in various low-level vision tasks, leading to several state-of-the-art image restoration techniques, including BM3D and LSSC. However, it still lacks a physically plausible explanation about why SSC is a better model than conventional sparse coding for the class of natural images. Meanwhile, the problem of sparsity optimization, especially when tangled with dictionary learning, is computationally difficult to solve. In this paper, we take a low-rank approach toward SSC and provide a conceptually simple interpretation from a bilateral variance estimation perspective, namely that singular-value decomposition of similar packed patches can be viewed as pooling both local and nonlocal information for estimating signal variances. Such perspective inspires us to develop a new class of image restoration algorithms called spatially adaptive iterative singular-value thresholding (SAIST). For noise data, SAIST generalizes the celebrated BayesShrink from local to nonlocal models; for incomplete data, SAIST extends previous deterministic annealing-based solution to sparsity optimization through incorporating the idea of dictionary learning. In addition to conceptual simplicity and computational efficiency, SAIST has achieved highly competent (often better) objective performance compared to several state-of-the-art methods in image denoising and completion experiments. 
Our subjective quality results compare favorably with those obtained by existing techniques, especially at high noise levels and with a large amount of missing data.", "Traditional image denoising algorithms always assume the noise to be homogeneous white Gaussian distributed. However, the noise on real images can be much more complex empirically. This paper addresses this problem and proposes a novel blind image denoising algorithm which can cope with real-world noisy images even when the noise model is not provided. It is realized by modeling image noise with mixture of Gaussian distribution (MoG) which can approximate large varieties of continuous distributions. As the number of components for MoG is unknown practically, this work adopts Bayesian nonparametric technique and proposes a novel Low-rank MoG filter (LR-MoG) to recover clean signals (patches) from noisy ones contaminated by MoG noise. Based on LR-MoG, a novel blind image denoising approach is developed. To test the proposed method, this study conducts extensive experiments on synthesis and real images. Our method achieves the state-of the-art performance consistently.", "A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image. >", "We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach provides a practical method for learning high-order Markov random field (MRF) models with potential functions that extend over large pixel neighborhoods. These clique potentials are modeled using the Product-of-Experts framework that uses non-linear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field-of-Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with specialized techniques.", "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "The classical solution to the noise removal problem is the Wiener filter, which utilizes the second-order statistics of the Fourier decomposition. 
Subband decompositions of natural images have significantly non-Gaussian higher-order point statistics; these statistics capture image properties that elude Fourier-based techniques. We develop a Bayesian estimator that is a natural extension of the Wiener solution, and that exploits these higher-order statistics. The resulting nonlinear estimator performs a \"coring\" operation. We provide a simple model for the subband statistics, and use it to develop a semi-blind noise removal algorithm based on a steerable wavelet pyramid." ], "cite_N": [ "@cite_30", "@cite_13", "@cite_18", "@cite_8", "@cite_9", "@cite_21", "@cite_32", "@cite_6", "@cite_19", "@cite_40", "@cite_23", "@cite_10", "@cite_25" ], "mid": [ "2103559027", "", "1504409388", "2963507294", "2820727372", "2896795507", "2048695508", "2014311222", "2474817805", "2150134853", "2130184048", "2097073572", "2149925139" ] }
Variational Denoising Network: Toward Blind Noise Modeling and Removal
Image denoising is an important research topic in computer vision, aiming at recovering the underlying clean image from an observed noisy one. The noise contained in a real noisy image is generally accumulated from multiple different sources, e.g., capturing instruments, data transmission media, image quantization, etc. [39]. Such complicated generation process makes it fairly difficult to access the noise information accurately and recover the underlying clean image from the noisy one. This constitutes the main aim of blind image denoising. There are two main categories of image denoising methods. Most classical methods belong to the first category, mainly focusing on constructing a rational maximum a posteriori (MAP) model, involving the fidelity (loss) and regularization terms, from a Bayesian perspective [6]. An understanding for data generation mechanism is required for designing a rational MAP objective, especially better image priors like sparsity [3], low-rankness [16,48,41], and non-local similarity [9,27]. These methods are superior mainly in their interpretability naturally led by the Bayesian framework. They, however, still exist critical limitations due to their assumptions on both image prior and noise (generally i.i.d. Gaussian), possibly deviating from real spatially variant (i.e.,non-i.i.d.) noise, and their relatively low implementation speed since the algorithm needs to be re-implemented for any new coming image. Recently, deep learning approaches represent a new trend along this research line. The main idea is to firstly collect large amount of noisy-clean image pairs and then train a deep neural network denoiser on these training data in an end-to-end learning manner. This approach is especially superior in its effective accumulation of knowledge from large datasets and fast denoising speed for test images. They, however, are easy to overfit to the training data with certain noisy types, and still could not be generalized well on test images with unknown but complicated noises. Thus, blind image denoising especially for real images is still a challenging task, since the real noise distribution is difficult to be pre-known (for model-driven MAP approaches) and hard to be comprehensively simulated by training data (for data-driven deep learning approaches). Against this issue, this paper proposes a new variational inference method, aiming at directly inferring both the underlying clean image and the noise distribution from an observed noisy image in a unique Bayesian framework. Specifically, an approximate posterior is presented by taking the intrinsic clean image and noise variances as latent variables conditioned on the input noisy image. This posterior provides explicit parametric forms for all its involved hyper-parameters, and thus can be efficiently implemented for blind image denoising with automatic noise estimation for test noisy images. In summary, this paper mainly makes following contributions: 1) The proposed method is capable of simultaneously implementing both noise estimation and blind image denoising tasks in a unique Bayesian framework. The noise distribution is modeled as a general non-i.i.d. configurations with spatial relevance across the image, which evidently better complies with the heterogeneous real noise beyond the conventional i.i.d. noise assumption. 2) Succeeded from the fine generalization capability of the generative model, the proposed method is verified to be able to effectively estimate and remove complicated non-i.i.d. 
noises in test images even though such noise types have never appeared in training data, as clearly shown in Fig. 3. 3) The proposed method is a generative approach outputted a complete distribution revealing how the noisy image is generated. This not only makes the result with more comprehensive interpretability beyond traditional methods purely aiming at obtaining a clean image, but also naturally leads to a learnable likelihood (fidelity) term according to the data-self. 4) The most commonly utilized deep learning paradigm, i.e., taking MSE as loss function and training on large noisy-clean image pairs, can be understood as a degenerated form of the proposed generative approach. Their overfitting issue can then be easily explained under this variational inference perspective: these methods intrinsically put dominant emphasis on fitting the priors of the latent clean image, while almost neglects the effect of noise variations. This makes them incline to overfit noise bias on training data and sensitive to the distinct noises in test noisy images. The paper is organized as follows: Section 2 introduces related work. Sections 3 presents the proposed full Bayesion model, the deep variational inference algorithm, the network architecture and some discussions. Section 4 demonstrates experimental results and the paper is finally concluded. Variational Denoising Network for Blind Noise Modeling Given training set D = {y j , x j } n j=1 , where y j , x j denote the j th training pair of noisy and the expected clean images, n represents the number of training images, our aim is to construct a variational parametric approximation to the posterior of the latent variables, including the latent clean image and the noise variances, conditioned on the noisy image. Note that for the noisy image y, its training pair x is generally a simulated "clean" one obtained as the average of many noisy ones taken under similar camera conditions [4,1], and thus is always not the exact latent clean image z. This explicit parametric posterior can then be used to directly infer the clean image and noise distribution from any test noisy image. To this aim, we first need to formulate a rational full Bayesian model of the problem based on the knowledge delivered by the training image pairs. Constructing Full Bayesian Model Based on Training Data Denote y = [y 1 , · · · , y d ] T and x = [x 1 , · · · , x d ] T as any training pair in D, where d (width*height) is the size of a training image 1 . We can then construct the following model to express the generation process of the noisy image y: y i ∼ N (y i |z i , σ 2 i ), i = 1, 2, · · · , d,(1) where z ∈ R d is the latent clean image underlying y, N (·|µ, σ 2 ) denotes the Gaussian distribution with mean µ and variance σ 2 . Instead of assuming i.i.d. distribution for the noise as conventional [28,13,16,41], which largely deviates the spatial variant and signal-depend characteristics of the real noise [45,8], we models the noise as a non-i.i.d. and pixel-wise Gaussian distribution in Eq. (1). The simulated "clean" image x evidently provides a strong prior to the latent variable z. Accordingly we impose the following conjugate Gaussian prior on z: z i ∼ N (z i |x i , ε 2 0 ), i = 1, 2, · · · , d,(2) where ε 0 is a hyper-parameter and can be easily set as a small value. 
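To make the generative model of Eqs. (1) and (2) concrete before introducing the remaining prior, here is a small sketch (our own, with assumed tensor shapes and an illustrative value for the hyper-parameter) of the corresponding per-pixel log-densities.

```python
import math
import torch

def log_likelihood(y, z, sigma2):
    """Eq. (1): y_i ~ N(z_i, sigma_i^2), with a distinct (non-i.i.d.) variance per pixel."""
    return (-0.5 * math.log(2 * math.pi) - 0.5 * sigma2.log()
            - 0.5 * (y - z) ** 2 / sigma2).sum()

def log_prior_z(z, x, eps0=1e-2):
    """Eq. (2): z_i ~ N(x_i, eps0^2), with x the simulated "clean" image and
    eps0 a small hyper-parameter (the default here is only illustrative)."""
    return (-0.5 * math.log(2 * math.pi) - math.log(eps0)
            - 0.5 * (z - x) ** 2 / eps0 ** 2).sum()
```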
Besides, for σ 2 = {σ 2 1 , σ 2 2 , · · · , σ 2 d }, we also introduce a rational conjugate prior as follows: σ 2 i ∼ IG σ 2 i | p 2 2 − 1, p 2 ξ i 2 , i = 1, 2, · · · , d,(3) where IG(·|α, β) is the inverse Gamma distribution with parameter α and β, ξ = G (ŷ −x) 2 ; p represents the filtering output of the variance map (ŷ −x) 2 by a Gaussian filter with p × p window, andŷ,x ∈ R h×w are the matrix (image) forms of y, x ∈ R d , respectively. Note that the mode of above IG distribution is ξ i [6,42], which is a approximate evaluation of σ 2 i in p × p window. Combining Eqs. (1)-(3), a full Bayesian model for the problem can be obtained. The goal then turns to infer the posterior of latent variables z and σ 2 from noisy image y, i.e., p(z, σ 2 |y). Variational Form of Posterior We first construct a variational distribution q(z, σ 2 |y) to approximate the posterior p(z, σ 2 |y) led by Eqs. (1)-(3). Similar to the commonly used mean-field variation inference techniques, we assume conditional independence between variables z and σ 2 , i.e., q(z, σ 2 |y) = q(z|y)q(σ 2 |y). Based on the conjugate priors in Eqs. (2) and (3), it is natural to formulate variational posterior forms of z and σ 2 as follows: where µ i (y; W D ) and m 2 i (y; W D ) are designed as the prediction functions for getting posterior parameters of latent variable z directly from y. The function is represented as a network, called denoising network or D-Net, with parameters W D . Similarly, α i (y; W S ) and β i (y; W S ) denote the prediction functions for evaluating posterior parameters of σ 2 from y, where W S represents the parameters of the network, called Sigma network or S-Net. The aforementioned is illustrated in Fig. 1. Our aim is then to optimize these network parameters W D and W S so as to get the explicit functions for predicting clean image z as well as noise knowledge σ 2 from any test noisy image y. A rational objective function with respect to W D and W S is thus necessary to train both the networks. q(z|y) = d i N (z i |µ i (y; W D ), m 2 i (y; W D )), q(σ 2 |y) = d i IG(σ 2 i |α i (y; W S ), β i (y; W S )), (5) ( ( | )|| ) ( ( 2 | )|| 2 ) ( , 2 ) [log , 2 ] ℒ( Note that the network parameters W D and W S are shared by posteriors calculated on all training data, and thus if we train them on the entire training set, the method is expected to induce the general statistical inference insight from noisy image to its underlying clean image and noise level. Variational Lower Bound of Marginal Data Likelihood For notation convenience, we simply write µ i (y; W D ), m 2 i (y; W D ), α i (y; W S ), β i (y; W S ) as µ i , m 2 i , α i , β i in the following calculations. For any noisy image y and its simulated "clean" image x in the training set, we can decompose its marginal likelihood as the following form [7]: log p(y) = L(z, σ 2 ; y) + D KL q(z, σ 2 |y)||p(z, σ 2 |y) ,(6) where L(z, σ 2 ; y) = E q(z,σ 2 |y) log p(y|z, σ 2 )p(z)p(σ 2 ) − log q(z, σ 2 |y) ,(7) Here E p(x) [f (x)] represents the exception of f (x) w.r.t. stochastic variable x with probability density function p(x). The second term of Eq. (6) is a KL divergence between the variational approximate posterior q(z, σ 2 |y) and the true posterior p(z, σ 2 |y) with non-negative value. Thus the first term L(z, σ 2 ; y) constitutes a variational lower bound on the logarithm of marginal likelihood p(y), i.e., log p(y) ≥ L(z, σ 2 ; y).(8) According to Eqs. 
(4), (5) and (7), the lower bound can then be rewritten as: L(z, σ 2 ; y) = E q(z,σ 2 |y) log p(y|z, σ 2 ) − D KL (q(z|y)||p(z)) − D KL q(σ 2 |y)||p(σ 2 ) . (9) It's pleased that all the three terms in Eq (9) can be integrated analytically as follows: E q(z,σ 2 |y) log p(y|z, σ 2 ) = d i=1 − 1 2 log 2π − 1 2 (log βi − ψ(αi)) − αi 2βi (yi − µi) 2 + m 2 i ,(10)DKL (q(z|y)||p(z)) = d i=1 (µi − xi) 2 2ε 2 0 + 1 2 m 2 i ε 2 0 − log m 2 i ε 2 0 − 1 ,(11)DKL q(σ 2 |y)||p(σ 2 ) = d i=1 αi − p 2 2 + 1 ψ(αi) + log Γ p 2 2 − 1 − log Γ(αi) + p 2 2 − 1 log βi − log p 2 ξi 2 + αi p 2 ξi 2βi − 1 ,(12) where ψ(·) denotes the digamma function. Calculation details are listed in supplementary material. We can then easily get the expected objective function (i.e., a negtive lower bound of the marginal likelihood on entire training set) for optimizing the network parameters of D-Net and S-Net as follows: min W D ,W S − n j=1 L(z j , σ 2 j ; y j ).(13) Network Learning As aforementioned, we use D-Net and S-Net together to infer the variational parameters µ, m 2 and α, β from the input noisy image y, respectively, as shown in Fig. 1. It is critical to consider how to calculate derivatives of this objective with respect to W D , W S involved in µ, m 2 , α and β to facilitate an easy use of stochastic gradient varitional inference. Fortunately, different from other related variational inference techniques like VAE [22], all three terms of Eqs. (10)- (12) in the lower bound Eq. (9) are differentiable and their derivatives can be calculated analytically without the need of any reparameterization trick, largely reducing the difficulty of network training. At the training stage of our method, the network parameters can be easily updated with backpropagation (BP) algorithm [15] through Eq. (13). The function of each term in this objective can be intuitively explained: the first term represents the likelihood of the observed noisy images in training set, and the last two terms control the discrepancy between the variational posterior and the corresponding prior. During the BP training process, the gradient information from the likelihood term of Eq. (10) is used for updating both the parameters of D-Net and S-Net simultaneously, implying that the inference for the latent clean image z and σ 2 is guided to be learned from each other. At the test stage, for any test noisy image, through feeding it into D-Net, the final denoising result can be directly obtained by µ. Additionally, through inputting the noisy image to the S-Net, the noise distribution knowledge (i.e., σ 2 ) is easily inferred. Specifically, the noise variance in each pixel can be directly obtained by using the mode of the inferred inverse Gamma distribution: σ 2 i = βi (αi+1) . Network Architecture The D-Net in Fig. 1 takes the noisy image y as input to infer the variational parameters µ and m 2 in q(z|y) of Eq. (5), and performs the denoising task in the proposed variational inference algorithm. In order to capture multi-scale information of the image, we use a U-Net [34] with depth 4 as the D-Net, which contains 4 encoder blocks ([Conv+ReLU]×2+Average pooling), 3 decoder blocks (Transpose Conv+[Conv+ReLU]×2) and symmetric skip connection under each scale. For parameter µ, the residual learning strategy is adopted as in [44], i.e., µ = y + f (y; W D ), where f (·; W D ) denotes the D-Net with parameters W D . 
As for the S-Net, which takes the noisy image y as input and outputs the predicted variational parameters α and β in q(σ 2 |y) of Eq (5), we use the DnCNN [44] architecture with five layers, and the feature channels of each layer is set as 64. It should be noted that our proposed method is a general framework, most of the commonly used network architectures [45,33,24,46] in image restoration can also be easily substituted. Some Discussions It can be seen that the proposed method succeeds advantages of both model-driven MAP and datadriven deep learning methods. On one hand, our method is a generative approach and possesses fine interpretability to the data generation mechanism; and on the other hand it conducts an explicit prediction function, facilitating efficient image denoising as well as noise estimation directly through an input noisy image. Furthermore, beyond current methods, our method can finely evaluate and remove non-i.i.d. noises embedded in images, and has a good generalization capability to images with complicated noises, as evaluated in our experiments. This complies with the main requirement of the blind image denoising task. If we set the hyper-parameter ε 2 0 in Eq.(2) as an extremely small value close to 0, it is easy to see that the objective of the proposed method is dominated by the second term of Eq. (10), which makes ; W D ) − x j || 2 . This provides a new understanding to explain why they incline to overfit noise bias in training data. The posterior inference process puts dominant emphasis on fitting priors imposed on the latent clean image, while almost neglects the effect of noise variations. This naturally leads to its sensitiveness to unseen complicated noises contained in test images. Very recently, both CBDNet [17] and FFDNet [45] are presented for the denoising task by feeding the noisy image integrated with the pre-estimated noise level into the deep network to make it better generalize to distinct noise types in training stage. Albeit more or less improving the generalization capability of network, such strategy is still too heuristic and is not easy to interpret how the input noise level intrinsically influence the final denoising result. Comparatively, our method is constructed in a sound Bayesian manner to estimate clean image and noise distribution together from the input noisy image, and its generalization can be easily explained from the perspective of generative model. Experimental Results We evaluate the performance of our method on synthetic and real datasets in this section. All experiments are evaluated in the sRGB space. We briefly denote our method as VDN in the following. The training and testing codes of our VDN is available at https://github.com/zsyOAOA/VDNet. Experimental Setting Network training and parameter setting: The weights of D-Net and S-Net in our variational algorithm were initialized according to [18]. In each epoch, we randomly crop N = 64 × 5000 patches with size 128 × 128 from the images for training. The Adam algorithm [21] is adopted to optimize the network parameters through minimizing the proposed negative lower bound objective. The initial learning rate is set as 2e-4 and linearly decayed in half every 10 epochs until to 1e-6. The window size p in Eq. (3) is set as 7. The hyper-parameter ε 2 0 is set as 5e-5 and 1e-6 in the following synthetic and real-world image denoising experiments, respectively. 
Comparison methods: Several state-of-the-art denoising methods are adopted for performance comparison, including CBM3D [11], WNNM [16], NCSR [14], MLP [10], DnCNN-B [44], Mem-Net [38], FFDNet [45], UDNet [24] and CBDNet [17]. Note that CBDNet is mainly designed for blind denoising task, and thus we only compared CBDNet on the real noise removal experiments. (a) (c) (b) (d) (e) (f ) Experiments on Synthetic Non-I.I.D. Gaussian Noise Cases Similar to [45], we collected a set of source images to train the network, including 432 images from BSD [5], 400 images from the validation set of ImageNet [12] and 4744 images from the Waterloo Exploration Database [26]. Three commonly used datasets in image restoration (Set5, LIVE1 and BSD68 in [20]) were adopted as test datasets to evaluate the performance of different methods. In order to evaluate the effectiveness and robustness of VDN under the non-i.i.d. noise configuration, we simulated the non-i.i.d. Gaussian noise as following, n = n 1 ⊙ M , n 1 ij ∼ N (0, 1),(14) where M is a spatially variant map with the same size as the source image. We totally generated four kinds of M s as shown in Fig. 2. The first ( Fig. 2 (a)) is used for generating noisy images of training data and the others (Fig. 2 (b)-(d)) generating three groups of testing data (denotes as Cases 1-3). Under this noise generation manner, the noises in training data and testing data are with evident difference, suitable to verify the robustness and generalization capability of competing methods. Comparson with the State-of-the-art: Table 1 lists the average PSNR results of all competing methods on three groups of testing data. From Table 1, it can be easily observed that: 1) The VDN outperforms other competing methods in all cases, indicating that VDN is able to handle such complicated non-i.i.d. noise; 2) VDN surpasses FFDNet about 0.25dB averagely even though FFDNet depends on the true noise level information instead of automatically inferring noise distribution as our method; 3) the discriminative methods MLP, DnCNN-B and UDNet seem to evidently overfit on training noise bias; 4) the classical model-driven method CBM3D performs more stably than WNNM and NCSR, possibly due to the latter's improper i.i.d. Gaussian noise assumption. Fig. 3 shows the denoising results of different competing methods on one typical image in testing set of Case 2, and more denoising results can be found in the supplementary material. Note that we only display the top four best results from all due to page limitation. It can be seen that the denoised images by CBM3D and DnCNN-B still contain obvious noise, and FFDNet over-smoothes the image and loses some edge information, while our proposed VDN removes most of the noise and preserves more details. Even though our VDN is designed based on the non-i.i.d. noise assumption and trained on the non-i.i.d. noise data, it also performs well on additive white Gaussian noise (AWGN) removal task. Table 2 lists the average PSNR results under three noise levels (σ = 15, 25, 50) of AWGN. It is easy to see that our method obtains the best or at least comparable performance with the state-of-the-art method FFDNet. Combining Table 1 and Table 2, it should be rational to say that our VDN is robust and able to handle a wide range of noise types, due to its better noise modeling manner. Noise Variance Prediction: The S-Net plays the role of noise modeling and is able to infer the noise distribution from the noisy image. 
To verify the fitting capability of S-Net, we provided the M Experiments on Real-World Noise In this part, we evaluate the performance of VDN on real blind denoising task, including two banchmark datasets: DND [31] and SIDD [1]. DND consists of 50 high-resolution images with realistic noise from 50 scenes taken by 4 consumer cameras. However, it does not provide any other additional noisy and clean image pairs to train the network. SIDD [1] is another real-world denoising benchmark, containing 30, 000 real noisy images captured by 5 cameras under 10 scenes. For each noisy image, it estimates one simulated "clean" image through some statistical methods [1]. About 80% (∼ 24, 000 pairs) of this dataset are provided for training purpose, and the rest as held for benchmark. And 320 image pairs selected from them are packaged together as a medium version of SIDD, called SIDD Medium Dataset 2 , for fast training of a denoiser. We employed this medium vesion dataset to train a real-world image denoiser, and test the performance on the two benchmarks. Table 3 lists PSNR results of different methods on SIDD benchmark 3 . Note that we only list the results of the competing methods that are available on the official benchmark website 2 . It is evident that VDN outperforms other methods. However, note that neither DnCNN-B nor CBDNet performs well, possibly because they were trained on the other datasets, whose noise type is different from SIDD. For fair comparison, we retrained DnCNN-B and CBDNet based on the SIDD dataset. The performance on the SIDD validation set is also listed in Table 3 Table 4 lists the performance of all competing methods on the DND benchmark 4 . From the table, it is easy to be seen that our proposed VDN surpasses all the competing methods. It is worth noting that CBDNet has the same optimized network with us, containing a S-Net designed for estimating the noise distribution and a D-Net for denoising. The superiority of VDN compared with CBDNet mainly benefits from the deep variational inference optimization. For easy visualization, on one typical denoising example, results of the best four competing methods are displayed in Fig. 4. Obviously, WNNM is ubable to remove the complex real noise, maybe because the low-rankness prior is insufficient to describe all the image information and the IID Gaussian noise assumption is in conflict with the real noise. With the powerful feature extraction ability of CNN, DnCNN and CBDNet obtain much better denoising results than WNNM, but still with a little noise. However, the denoising result of our proposed VDN has almost no noise and is very close to the groundtruth. In Fig. 5, we displayed the noise variance map predicted by S-Net on the two real benchmarks. The variance maps had been enlarged several times for easy visualization. It is easy to see that the predicted noise variance map relates to the image content, which is consistent with the well-known signal-depend property of real noise to some extent. Hyper-parameters Analysis The hyper-parameter ε 0 in Eq. (2) determines how much does the desired latent clean image z depend on the simulated groundtruth x. As discussed in Section 3.6, the negative variational lower bound degenerates into MSE loss when ε 0 is setted as an extremely small value close to 0. The performance of VDN under different ε 0 values on the SIDD validation dataset is listed in Table 5. For explicit comparison, we also directly trained the D-Net under MSE loss as baseline. 
From Table 5, we can see that: 1) when ε 0 is too large, the proposed VDN obtains relatively worse results since the prior constraint on z by simulated groundtruth x becomes weak; 2) with ε 0 decreasing, the performance of VDN tends to be similar with MSE loss as analysised in theory; 3) the results of VDN surpasses MSE loss about 0.3 dB PSNR when ε 2 0 = 1e-6, which verifies the importantance of noise modeling in our method. Therefore, we suggest that the ε 2 0 is set as 1e-5 or 1e-6 in the real-world denoising task. In Eq. (3), we introduced a conjugate inverse gamma distribution as prior for σ 2 . The mode of this inverse gamma distribution ξ i provides a rational approximate evaluation for σ 2 i , which is a local estimation in a p × p window centered at the i th pixel. We compared the performance of VDN under different p values on the SIDD validation dataset in Table 6. Empirically, VDN performs consistently well for the hyper-parameter p. Conclusion We have proposed a new variational inference algorithm, namely varitional denoising network (VDN), for blind image denoising. The main idea is to learn an approximate posterior to the true posterior with the latent variables (including clean image and noise variances) conditioned on the input noisy image. Using this variational posterior expression, both tasks of blind image denoising and noise estimation can be naturally attained in a unique Bayesian framework. The proposed VDN is a generative method, which can easily estimate the noise distribution from the input data. Comprehensive experiments have demonstrated the superiority of VDN to previous works on blind image denoising. Our method can also facilitate the study of other low-level vision tasks, such as super-resolution and deblurring. Specifically, the fidelity term in these tasks can be more faithfully set under the estimated non-i.i.d. noise distribution by VDN, instead of the traditional i.i.d. Gaussian noise assumption.
4,560
1908.11314
2970733215
Blind image denoising is an important yet very challenging problem in computer vision due to the complicated acquisition process of real images. In this work we propose a new variational inference method, which integrates both noise estimation and image denoising into a unique Bayesian framework, for blind image denoising. Specifically, an approximate posterior, parameterized by deep neural networks, is presented by taking the intrinsic clean image and noise variances as latent variables conditioned on the input noisy image. This posterior provides explicit parametric forms for all its involved hyper-parameters, and thus can be easily implemented for blind image denoising with automatic noise estimation for the test noisy image. On one hand, like other data-driven deep learning methods, our method, namely the variational denoising network (VDN), can perform denoising efficiently due to the explicit form of its posterior expression. On the other hand, VDN inherits the advantages of traditional model-driven approaches, especially the good generalization capability of generative models. VDN has good interpretability and can be flexibly utilized to estimate and remove complicated non-i.i.d. noise collected in real scenarios. Comprehensive experiments are performed to substantiate the superiority of our method in blind image denoising.
Instead of pre-setting an image prior, deep learning methods directly learn a denoiser (formed as a deep neural network) from noisy images to clean ones on a large collection of noisy-clean image pairs. Jain and Seung @cite_1 first adopted a five-layer convolutional neural network (CNN) for the task. Then some auto-encoder based methods @cite_22 @cite_34 were applied. Meanwhile, @cite_11 achieved performance comparable to BM3D using a plain multi-layer perceptron (MLP). @cite_27 further proposed the denoising convolutional network (DnCNN) and achieved state-of-the-art performance on Gaussian denoising tasks. @cite_33 proposed a deep fully convolutional encoding-decoding network with symmetric skip connections. In order to boost the flexibility against spatially variant noise, FFDNet @cite_26 was proposed by pre-evaluating the noise level and feeding it into the network together with the noisy image. @cite_16 and @cite_28 both attempted to simulate the in-camera image generation process.
{ "abstract": [ "Due to the fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level, and require multiple models for denoising images with different noise levels. They also lack flexibility to deal with spatially variant noise, limiting their applications in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to the existing discriminative denoisers, FFDNet enjoys several desirable properties, including: 1) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network; 2) the ability to remove spatially variant noise by specifying a non-uniform noise level map; and 3) faster speed than benchmark BM3D even on CPU without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.", "We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.", "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and deconvolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises corruptions. Deconvolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and deconvolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. 
Second, these skip connections pass image details from convolutional layers to deconvolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than recent state-of-the-art methods.", "Machine learning techniques work best when the data used for training resembles the data used for evaluation. This holds true for learned single-image denoising algorithms, which are applied to real raw camera sensor readings but, due to practical constraints, are often trained on synthetic image data. Though it is understood that generalizing from synthetic to real data requires careful consideration of the noise properties of image sensors, the other aspects of a camera's image processing pipeline (gain, color correction, tone mapping, etc) are often overlooked, despite their significant effect on how raw measurements are transformed into finished images. To address this, we present a technique to \"unprocess\" images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available internet photos. We additionally model the relevant components of an image processing pipeline when evaluating our loss function, which allows training to be aware of all relevant photometric processing that will occur after denoising. By processing and unprocessing model outputs and training data in this way, we are able to train a simple convolutional neural network that has 14 -38 lower error rates and is 9x-18x faster than the previous state of the art on the Darmstadt Noise Dataset, and generalizes to sensors outside of that dataset as well.", "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.", "While deep convolutional neural networks (CNNs) have achieved impressive success in image denoising with additive white Gaussian noise (AWGN), their performance remains limited on real-world noisy photographs. The main reason is that their learned models are easy to overfit on the simplified AWGN model which deviates severely from the complicated real-world noise model. 
In order to improve the generalization ability of deep CNN denoisers, we suggest training a convolutional blind denoising network (CBDNet) with more realistic noise model and real-world noisy-clean image pairs. On the one hand, both signal-dependent noise and in-camera signal processing pipeline is considered to synthesize realistic noisy images. On the other hand, real-world noisy photographs and their nearly noise-free counterparts are also included to train our CBDNet. To further provide an interactive strategy to rectify denoising result conveniently, a noise estimation subnetwork with asymmetric learning to suppress under-estimation of noise level is embedded into CBDNet. Extensive experimental results on three datasets of real-world noisy photographs clearly demonstrate the superior performance of CBDNet over state-of-the-arts in terms of quantitative metrics and visual quality. The code has been made available at this https URL.", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.", "Stacked sparse denoising autoencoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. To address this limitation, we present the adaptive multi-column stacked sparse denoising autoencoder (AMC-SSDA), a novel technique of combining multiple SSDAs by (1) computing optimal column weights via solving a nonlinear optimization program and (2) training a separate network to predict the optimal weights. We eliminate the need to determine the type of noise, let alone its statistics, at test time and even show that the system can be robust to noise not seen in the training set. We show that state-of-the-art denoising performance can be achieved with a single system on a variety of different noise types. Additionally, we demonstrate the efficacy of AMC-SSDA as a preprocessing (denoising) algorithm by achieving strong classification performance on corrupted MNIST digits.", "Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. 
The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well." ], "cite_N": [ "@cite_26", "@cite_22", "@cite_33", "@cite_28", "@cite_1", "@cite_16", "@cite_27", "@cite_34", "@cite_11" ], "mid": [ "2764207251", "2146337213", "2964046669", "2901996700", "2098477387", "2832157980", "2508457857", "2151503710", "2037642501" ] }
Variational Denoising Network: Toward Blind Noise Modeling and Removal
Image denoising is an important research topic in computer vision, aiming at recovering the underlying clean image from an observed noisy one. The noise contained in a real noisy image is generally accumulated from multiple different sources, e.g., capturing instruments, data transmission media, image quantization, etc. [39]. Such a complicated generation process makes it fairly difficult to characterize the noise accurately and recover the underlying clean image from the noisy one. This constitutes the main aim of blind image denoising.

There are two main categories of image denoising methods. Most classical methods belong to the first category, mainly focusing on constructing a rational maximum a posteriori (MAP) model, involving fidelity (loss) and regularization terms, from a Bayesian perspective [6]. An understanding of the data generation mechanism is required for designing a rational MAP objective, especially good image priors such as sparsity [3], low-rankness [16,48,41], and non-local similarity [9,27]. These methods are superior mainly in the interpretability naturally brought by the Bayesian framework. They, however, still suffer from critical limitations: their assumptions on both the image prior and the noise (generally i.i.d. Gaussian) may deviate from real spatially variant (i.e., non-i.i.d.) noise, and their implementation speed is relatively low since the algorithm needs to be re-run for any newly coming image. Recently, deep learning approaches represent a new trend along this research line. The main idea is to first collect a large amount of noisy-clean image pairs and then train a deep neural network denoiser on these training data in an end-to-end manner. This approach is especially superior in its effective accumulation of knowledge from large datasets and its fast denoising speed on test images. Such methods, however, easily overfit to the training data with certain noise types, and still cannot generalize well to test images with unknown but complicated noise. Thus, blind image denoising, especially for real images, is still a challenging task, since the real noise distribution is difficult to know in advance (for model-driven MAP approaches) and hard to simulate comprehensively with training data (for data-driven deep learning approaches).

To address this issue, this paper proposes a new variational inference method, aiming at directly inferring both the underlying clean image and the noise distribution from an observed noisy image in a unique Bayesian framework. Specifically, an approximate posterior is presented by taking the intrinsic clean image and noise variances as latent variables conditioned on the input noisy image. This posterior provides explicit parametric forms for all its involved hyper-parameters, and thus can be efficiently implemented for blind image denoising with automatic noise estimation for test noisy images. In summary, this paper mainly makes the following contributions: 1) The proposed method is capable of simultaneously implementing both the noise estimation and blind image denoising tasks in a unique Bayesian framework. The noise distribution is modeled in a general non-i.i.d. configuration with spatial relevance across the image, which better complies with heterogeneous real noise than the conventional i.i.d. noise assumption. 2) Inheriting the fine generalization capability of generative models, the proposed method is verified to be able to effectively estimate and remove complicated non-i.i.d.
noise in test images even though such noise types have never appeared in the training data, as clearly shown in Fig. 3. 3) The proposed method is a generative approach that outputs a complete distribution revealing how the noisy image is generated. This not only gives the result more comprehensive interpretability than traditional methods that purely aim at obtaining a clean image, but also naturally leads to a likelihood (fidelity) term learned from the data themselves. 4) The most commonly utilized deep learning paradigm, i.e., taking MSE as the loss function and training on large sets of noisy-clean image pairs, can be understood as a degenerate form of the proposed generative approach. Its overfitting issue can then be easily explained from this variational inference perspective: such methods intrinsically put dominant emphasis on fitting the prior of the latent clean image, while almost neglecting the effect of noise variations. This makes them inclined to overfit the noise bias of the training data and sensitive to the distinct noise in test noisy images.

The paper is organized as follows: Section 2 introduces related work. Section 3 presents the proposed full Bayesian model, the deep variational inference algorithm, the network architecture and some discussions. Section 4 demonstrates experimental results and the paper is finally concluded.

Variational Denoising Network for Blind Noise Modeling. Given a training set D = {y_j, x_j}_{j=1}^n, where y_j, x_j denote the j-th training pair of noisy and expected clean images and n represents the number of training images, our aim is to construct a variational parametric approximation to the posterior of the latent variables, including the latent clean image and the noise variances, conditioned on the noisy image. Note that for the noisy image y, its training pair x is generally a simulated "clean" image obtained as the average of many noisy ones taken under similar camera conditions [4,1], and thus is generally not the exact latent clean image z. This explicit parametric posterior can then be used to directly infer the clean image and noise distribution from any test noisy image. To this aim, we first need to formulate a rational full Bayesian model of the problem based on the knowledge delivered by the training image pairs.

Constructing the Full Bayesian Model Based on Training Data. Denote y = [y_1, ..., y_d]^T and x = [x_1, ..., x_d]^T as any training pair in D, where d (width × height) is the size of a training image. We can then construct the following model to express the generation process of the noisy image y:

$y_i \sim \mathcal{N}(y_i \,|\, z_i, \sigma_i^2),\ \ i = 1, 2, \cdots, d, \quad (1)$

where z ∈ R^d is the latent clean image underlying y, and N(·|µ, σ^2) denotes the Gaussian distribution with mean µ and variance σ^2. Instead of assuming an i.i.d. noise distribution as is conventional [28,13,16,41], which largely deviates from the spatially variant and signal-dependent characteristics of real noise [45,8], we model the noise as a non-i.i.d., pixel-wise Gaussian distribution in Eq. (1). The simulated "clean" image x evidently provides a strong prior for the latent variable z. Accordingly, we impose the following conjugate Gaussian prior on z:

$z_i \sim \mathcal{N}(z_i \,|\, x_i, \varepsilon_0^2),\ \ i = 1, 2, \cdots, d, \quad (2)$

where ε_0 is a hyper-parameter and can be easily set to a small value.
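To make the generation model of Eqs. (1)-(2) concrete, the following minimal NumPy sketch samples a noisy observation from a toy clean image under pixel-wise variances. The image size and the variance map used here are arbitrary placeholders, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

h, w = 64, 64                       # toy image size (placeholder)
x = rng.uniform(0.0, 1.0, (h, w))   # simulated "clean" image x
eps0 = np.sqrt(5e-5)                # prior std around x, i.e. eps_0 in Eq. (2)

# latent clean image z_i ~ N(x_i, eps0^2), Eq. (2)
z = x + eps0 * rng.standard_normal((h, w))

# pixel-wise (non-i.i.d.) noise variances; an arbitrary toy map here
sigma2 = 0.01 + 0.04 * rng.uniform(0.0, 1.0, (h, w))

# observed noisy image y_i ~ N(z_i, sigma_i^2), Eq. (1)
y = z + np.sqrt(sigma2) * rng.standard_normal((h, w))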
Besides, for σ^2 = {σ_1^2, σ_2^2, ..., σ_d^2}, we also introduce a rational conjugate prior as follows:

$\sigma_i^2 \sim \mathrm{IG}\!\left(\sigma_i^2 \,\Big|\, \frac{p^2}{2} - 1,\ \frac{p^2 \xi_i}{2}\right),\ \ i = 1, 2, \cdots, d, \quad (3)$

where IG(·|α, β) is the inverse Gamma distribution with parameters α and β, ξ = G((ŷ − x̂)^2; p) is the output of filtering the variance map (ŷ − x̂)^2 with a Gaussian filter of p × p window, and ŷ, x̂ ∈ R^{h×w} are the matrix (image) forms of y, x ∈ R^d, respectively. Note that the mode of the above IG distribution is ξ_i [6,42], which is an approximate evaluation of σ_i^2 within a p × p window. Combining Eqs. (1)-(3), a full Bayesian model for the problem is obtained. The goal then turns to inferring the posterior of the latent variables z and σ^2 from the noisy image y, i.e., p(z, σ^2|y).
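As a concrete illustration of the prior hyper-parameters in Eq. (3), the sketch below computes ξ and the resulting inverse-Gamma parameters from a training pair. The paper only specifies a p × p Gaussian window, so the particular scipy filter call and its sigma are assumptions of this sketch.

import numpy as np
from scipy.ndimage import gaussian_filter

def ig_prior_params(y_img, x_img, p=7):
    """Hyper-parameters of the inverse-Gamma prior in Eq. (3).

    Assumes single-channel images. xi is the Gaussian-filtered variance
    map (y - x)^2; the sigma below is an assumed mapping from the
    p x p window to a filter width.
    """
    var_map = (y_img - x_img) ** 2
    xi = gaussian_filter(var_map, sigma=p / 4.0, truncate=2.0)  # roughly p x p support
    alpha0 = p ** 2 / 2.0 - 1.0      # shape parameter of the IG prior
    beta0 = p ** 2 * xi / 2.0        # scale parameter of the IG prior (pixel-wise)
    # sanity check: the mode beta0 / (alpha0 + 1) equals xi, as stated in the text
    return alpha0, beta0, xi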
Variational Form of the Posterior. We first construct a variational distribution q(z, σ^2|y) to approximate the posterior p(z, σ^2|y) led by Eqs. (1)-(3). Similar to commonly used mean-field variational inference techniques, we assume conditional independence between the variables z and σ^2, i.e.,

$q(z, \sigma^2 \,|\, y) = q(z|y)\, q(\sigma^2|y). \quad (4)$

Based on the conjugate priors in Eqs. (2) and (3), it is natural to formulate the variational posteriors of z and σ^2 as follows:

$q(z|y) = \prod_{i=1}^{d} \mathcal{N}\big(z_i \,|\, \mu_i(y; W_D),\ m_i^2(y; W_D)\big), \qquad q(\sigma^2|y) = \prod_{i=1}^{d} \mathrm{IG}\big(\sigma_i^2 \,|\, \alpha_i(y; W_S),\ \beta_i(y; W_S)\big), \quad (5)$

where µ_i(y; W_D) and m_i^2(y; W_D) are designed as prediction functions for obtaining the posterior parameters of the latent variable z directly from y. These functions are represented as a network, called the denoising network or D-Net, with parameters W_D. Similarly, α_i(y; W_S) and β_i(y; W_S) denote the prediction functions for evaluating the posterior parameters of σ^2 from y, where W_S represents the parameters of the network, called the Sigma network or S-Net. The overall framework is illustrated in Fig. 1. Our aim is then to optimize the network parameters W_D and W_S so as to obtain explicit functions for predicting the clean image z as well as the noise knowledge σ^2 from any test noisy image y. A rational objective function with respect to W_D and W_S is thus necessary to train both networks. Note that the network parameters W_D and W_S are shared by the posteriors calculated on all training data, and thus if we train them on the entire training set, the method is expected to induce a general statistical inference mapping from a noisy image to its underlying clean image and noise level.

Variational Lower Bound of the Marginal Data Likelihood. For notational convenience, we simply write µ_i(y; W_D), m_i^2(y; W_D), α_i(y; W_S), β_i(y; W_S) as µ_i, m_i^2, α_i, β_i in the following calculations. For any noisy image y and its simulated "clean" image x in the training set, we can decompose the marginal likelihood into the following form [7]:

$\log p(y) = \mathcal{L}(z, \sigma^2; y) + D_{KL}\big(q(z, \sigma^2|y)\,\|\,p(z, \sigma^2|y)\big), \quad (6)$

where

$\mathcal{L}(z, \sigma^2; y) = \mathbb{E}_{q(z, \sigma^2|y)}\big[\log p(y|z, \sigma^2)\, p(z)\, p(\sigma^2) - \log q(z, \sigma^2|y)\big]. \quad (7)$

Here E_{p(x)}[f(x)] represents the expectation of f(x) w.r.t. the stochastic variable x with probability density function p(x). The second term of Eq. (6) is the KL divergence between the variational approximate posterior q(z, σ^2|y) and the true posterior p(z, σ^2|y), which is non-negative. Thus the first term L(z, σ^2; y) constitutes a variational lower bound on the logarithm of the marginal likelihood p(y), i.e.,

$\log p(y) \ge \mathcal{L}(z, \sigma^2; y). \quad (8)$

According to Eqs. (4), (5) and (7), the lower bound can then be rewritten as:

$\mathcal{L}(z, \sigma^2; y) = \mathbb{E}_{q(z, \sigma^2|y)}\big[\log p(y|z, \sigma^2)\big] - D_{KL}\big(q(z|y)\,\|\,p(z)\big) - D_{KL}\big(q(\sigma^2|y)\,\|\,p(\sigma^2)\big). \quad (9)$

Pleasantly, all three terms in Eq. (9) can be evaluated analytically as follows:

$\mathbb{E}_{q(z, \sigma^2|y)}\big[\log p(y|z, \sigma^2)\big] = \sum_{i=1}^{d} \Big\{ -\frac{1}{2}\log 2\pi - \frac{1}{2}\big(\log \beta_i - \psi(\alpha_i)\big) - \frac{\alpha_i}{2\beta_i}\big[(y_i - \mu_i)^2 + m_i^2\big] \Big\}, \quad (10)$

$D_{KL}\big(q(z|y)\,\|\,p(z)\big) = \sum_{i=1}^{d} \Big\{ \frac{(\mu_i - x_i)^2}{2\varepsilon_0^2} + \frac{1}{2}\Big[\frac{m_i^2}{\varepsilon_0^2} - \log\frac{m_i^2}{\varepsilon_0^2} - 1\Big] \Big\}, \quad (11)$

$D_{KL}\big(q(\sigma^2|y)\,\|\,p(\sigma^2)\big) = \sum_{i=1}^{d} \Big\{ \Big(\alpha_i - \frac{p^2}{2} + 1\Big)\psi(\alpha_i) + \log\Gamma\Big(\frac{p^2}{2} - 1\Big) - \log\Gamma(\alpha_i) + \Big(\frac{p^2}{2} - 1\Big)\Big(\log\beta_i - \log\frac{p^2\xi_i}{2}\Big) + \alpha_i\Big(\frac{p^2\xi_i}{2\beta_i} - 1\Big) \Big\}, \quad (12)$

where ψ(·) denotes the digamma function. Calculation details are listed in the supplementary material. We can then easily obtain the expected objective function (i.e., the negative lower bound of the marginal likelihood over the entire training set) for optimizing the network parameters of the D-Net and S-Net as follows:

$\min_{W_D, W_S}\ -\sum_{j=1}^{n} \mathcal{L}(z_j, \sigma_j^2; y_j). \quad (13)$

Network Learning. As aforementioned, we use the D-Net and S-Net together to infer the variational parameters µ, m^2 and α, β from the input noisy image y, respectively, as shown in Fig. 1. It is critical to consider how to calculate derivatives of this objective with respect to W_D and W_S, involved in µ, m^2, α and β, to facilitate an easy use of stochastic gradient variational inference. Fortunately, different from other related variational inference techniques like the VAE [22], all three terms of Eqs. (10)-(12) in the lower bound Eq. (9) are differentiable and their derivatives can be calculated analytically without the need of any reparameterization trick, largely reducing the difficulty of network training. At the training stage of our method, the network parameters can thus be easily updated with the backpropagation (BP) algorithm [15] through Eq. (13). The function of each term in this objective can be intuitively explained: the first term represents the likelihood of the observed noisy images in the training set, and the last two terms control the discrepancy between the variational posterior and the corresponding prior. During the BP training process, the gradient information from the likelihood term of Eq. (10) is used to update the parameters of both the D-Net and S-Net simultaneously, implying that the inference of the latent clean image z and that of σ^2 are guided to be learned from each other. At the test stage, for any test noisy image, by feeding it into the D-Net, the final denoising result can be directly obtained as µ. Additionally, by inputting the noisy image to the S-Net, the noise distribution knowledge (i.e., σ^2) is easily inferred. Specifically, the noise variance at each pixel can be directly obtained as the mode of the inferred inverse Gamma distribution: σ_i^2 = β_i / (α_i + 1).
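To illustrate how Eqs. (10)-(13) translate into a training loss, here is a hedged PyTorch sketch of the per-image negative lower bound, written under the assumption that the D-Net outputs (mu, m2) and the S-Net outputs (alpha, beta) are already strictly positive tensors where required; the variable names are choices of this sketch, not taken from the released code.

import math
import torch

def negative_elbo(y, x, mu, m2, alpha, beta, xi, eps0_sq=5e-5, p=7):
    """Per-image negative variational lower bound, Eqs. (10)-(13).

    y, x        : noisy image and simulated "clean" image (torch tensors)
    mu, m2      : D-Net outputs, posterior mean / variance of z (m2 > 0)
    alpha, beta : S-Net outputs, posterior inverse-Gamma parameters (> 0)
    xi          : Gaussian-filtered variance map from Eq. (3), as a tensor
    All image-shaped arguments share the same shape.
    """
    half_p2 = p ** 2 / 2.0

    # Eq. (10): expected log-likelihood under q(z, sigma^2 | y)
    lik = (-0.5 * math.log(2.0 * math.pi)
           - 0.5 * (torch.log(beta) - torch.digamma(alpha))
           - alpha / (2.0 * beta) * ((y - mu) ** 2 + m2))

    # Eq. (11): KL(q(z|y) || p(z)) with p(z) = N(x, eps0^2)
    ratio = m2 / eps0_sq
    kl_z = (mu - x) ** 2 / (2.0 * eps0_sq) + 0.5 * (ratio - torch.log(ratio) - 1.0)

    # Eq. (12): KL between the two inverse-Gamma distributions
    alpha0 = half_p2 - 1.0
    beta0 = half_p2 * xi
    kl_sigma = ((alpha - alpha0) * torch.digamma(alpha)
                + math.lgamma(alpha0) - torch.lgamma(alpha)
                + alpha0 * (torch.log(beta) - torch.log(beta0))
                + alpha * (beta0 / beta - 1.0))

    # Eqs. (9) and (13): negative lower bound, summed over pixels
    return -(lik - kl_z - kl_sigma).sum()

At test time the same network outputs are reused directly: the denoised image is mu and the pixel-wise noise variance is beta / (alpha + 1), as stated above.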
Network Architecture. The D-Net in Fig. 1 takes the noisy image y as input to infer the variational parameters µ and m^2 in q(z|y) of Eq. (5), and performs the denoising task in the proposed variational inference algorithm. In order to capture multi-scale information of the image, we use a U-Net [34] with depth 4 as the D-Net, which contains 4 encoder blocks ([Conv+ReLU]×2 + average pooling), 3 decoder blocks (transposed Conv + [Conv+ReLU]×2) and symmetric skip connections at each scale. For the parameter µ, the residual learning strategy is adopted as in [44], i.e., µ = y + f(y; W_D), where f(·; W_D) denotes the D-Net with parameters W_D. As for the S-Net, which takes the noisy image y as input and outputs the predicted variational parameters α and β in q(σ^2|y) of Eq. (5), we use the DnCNN [44] architecture with five layers, and the number of feature channels in each layer is set to 64. It should be noted that our proposed method is a general framework, and most of the commonly used network architectures [45,33,24,46] in image restoration can also be easily substituted in.

Some Discussions. It can be seen that the proposed method inherits the advantages of both model-driven MAP and data-driven deep learning methods. On one hand, our method is a generative approach and possesses good interpretability with respect to the data generation mechanism; on the other hand, it provides an explicit prediction function, facilitating efficient image denoising as well as noise estimation directly from an input noisy image. Furthermore, beyond current methods, our method can accurately estimate and remove non-i.i.d. noise embedded in images, and has a good generalization capability to images with complicated noise, as evaluated in our experiments. This complies with the main requirement of the blind image denoising task. If we set the hyper-parameter ε_0^2 in Eq. (2) to an extremely small value close to 0, it is easy to see that the objective of the proposed method is dominated by the second term of Eq. (9), i.e., the KL divergence in Eq. (11), which forces µ to match the simulated "clean" image x; the training objective then essentially degenerates to the MSE loss $\min_{W_D} \sum_j \|f(y_j; W_D) - x_j\|^2$ commonly adopted by deep learning denoisers. This provides a new understanding of why such methods are inclined to overfit the noise bias in the training data: the posterior inference process puts dominant emphasis on fitting the prior imposed on the latent clean image, while almost neglecting the effect of noise variations. This naturally leads to their sensitiveness to unseen complicated noise contained in test images. Very recently, CBDNet [17] and FFDNet [45] were presented for the denoising task; both feed the noisy image, together with a pre-estimated noise level, into the deep network so that it better generalizes to distinct noise types at the training stage. Albeit improving the generalization capability of the network to some extent, such a strategy is still rather heuristic, and it is not easy to interpret how the input noise level intrinsically influences the final denoising result. Comparatively, our method is constructed in a sound Bayesian manner to estimate the clean image and noise distribution together from the input noisy image, and its generalization can be easily explained from the perspective of a generative model.

Experimental Results. We evaluate the performance of our method on synthetic and real datasets in this section. All experiments are evaluated in the sRGB space. We briefly denote our method as VDN in the following. The training and testing codes of our VDN are available at https://github.com/zsyOAOA/VDNet.

Experimental Setting. Network training and parameter setting: The weights of the D-Net and S-Net in our variational algorithm were initialized according to [18]. In each epoch, we randomly crop N = 64 × 5000 patches of size 128 × 128 from the images for training. The Adam algorithm [21] is adopted to optimize the network parameters by minimizing the proposed negative lower bound objective. The initial learning rate is set to 2e-4 and is halved every 10 epochs until reaching 1e-6. The window size p in Eq. (3) is set to 7. The hyper-parameter ε_0^2 is set to 5e-5 and 1e-6 in the following synthetic and real-world image denoising experiments, respectively.
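Putting the architecture and training settings above together, here is a compact PyTorch sketch of a DnCNN-style S-Net (five convolutional layers, 64 channels) and a single Adam training step using the negative lower bound sketched earlier. The softplus output transform, the assumption that the D-Net returns (mu, m2) directly, and all helper names are choices of this sketch rather than details taken from the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SNet(nn.Module):
    """DnCNN-style sigma network: noisy image -> (alpha, beta) maps."""

    def __init__(self, in_channels=3, features=64, layers=5):
        super().__init__()
        body = [nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 2):
            body += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*body)
        self.head = nn.Conv2d(features, 2 * in_channels, 3, padding=1)

    def forward(self, y):
        out = self.head(self.body(y))
        alpha_raw, beta_raw = out.chunk(2, dim=1)
        # softplus keeps the inverse-Gamma parameters strictly positive (assumed choice)
        return F.softplus(alpha_raw) + 1e-6, F.softplus(beta_raw) + 1e-6

def train_step(d_net, s_net, optimizer, y, x, xi, eps0_sq=5e-5, p=7):
    """One optimization step with the negative lower bound sketched earlier.

    d_net is assumed to return (mu, m2) with m2 > 0; the data loader
    supplying (y, x, xi) patches and the learning-rate schedule are omitted.
    """
    mu, m2 = d_net(y)
    alpha, beta = s_net(y)
    loss = negative_elbo(y, x, mu, m2, alpha, beta, xi, eps0_sq, p)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(list(d_net.parameters()) + list(s_net.parameters()), lr=2e-4)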
Comparison methods: Several state-of-the-art denoising methods are adopted for performance comparison, including CBM3D [11], WNNM [16], NCSR [14], MLP [10], DnCNN-B [44], MemNet [38], FFDNet [45], UDNet [24] and CBDNet [17]. Note that CBDNet is mainly designed for the blind denoising task, and thus we only compare with it in the real noise removal experiments.

Experiments on Synthetic Non-I.I.D. Gaussian Noise. Similar to [45], we collected a set of source images to train the network, including 432 images from BSD [5], 400 images from the validation set of ImageNet [12] and 4744 images from the Waterloo Exploration Database [26]. Three datasets commonly used in image restoration (Set5, LIVE1 and BSD68 in [20]) were adopted as test datasets to evaluate the performance of different methods. In order to evaluate the effectiveness and robustness of VDN under the non-i.i.d. noise configuration, we simulated the non-i.i.d. Gaussian noise as follows:

$n = n^1 \odot M, \quad n^1_{ij} \sim \mathcal{N}(0, 1), \quad (14)$

where M is a spatially variant map with the same size as the source image and ⊙ denotes element-wise multiplication. We generated four kinds of M in total, as shown in Fig. 2. The first (Fig. 2 (a)) is used to generate the noisy images of the training data, and the others (Fig. 2 (b)-(d)) generate three groups of testing data (denoted as Cases 1-3). Under this noise generation scheme, the noise in the training data and the testing data differ evidently, which is suitable for verifying the robustness and generalization capability of the competing methods.

Comparison with the State-of-the-art: Table 1 lists the average PSNR results of all competing methods on the three groups of testing data. From Table 1, it can be easily observed that: 1) VDN outperforms the other competing methods in all cases, indicating that VDN is able to handle such complicated non-i.i.d. noise; 2) VDN surpasses FFDNet by about 0.25 dB on average, even though FFDNet depends on the true noise level information instead of automatically inferring the noise distribution as our method does; 3) the discriminative methods MLP, DnCNN-B and UDNet seem to evidently overfit to the training noise bias; 4) the classical model-driven method CBM3D performs more stably than WNNM and NCSR, possibly due to the latter two's improper i.i.d. Gaussian noise assumption. Fig. 3 shows the denoising results of different competing methods on one typical image in the testing set of Case 2, and more denoising results can be found in the supplementary material. Note that we only display the four best results due to the page limitation. It can be seen that the denoised images by CBM3D and DnCNN-B still contain obvious noise, and FFDNet over-smoothes the image and loses some edge information, while our proposed VDN removes most of the noise and preserves more details.

Even though our VDN is designed based on the non-i.i.d. noise assumption and trained on non-i.i.d. noise data, it also performs well on the additive white Gaussian noise (AWGN) removal task. Table 2 lists the average PSNR results under three AWGN noise levels (σ = 15, 25, 50). It is easy to see that our method obtains the best performance or is at least comparable to the state-of-the-art method FFDNet. Combining Table 1 and Table 2, it is reasonable to say that our VDN is robust and able to handle a wide range of noise types, owing to its better noise modeling.
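The following NumPy sketch illustrates the noise simulation of Eq. (14). The particular spatially variant map M used here (a smooth Gaussian-shaped bump) is only a stand-in for the maps in Fig. 2, which are not reproduced in this text, and the peak noise level is likewise an arbitrary choice.

import numpy as np

def simulate_noniid_noise(h, w, peak_sigma=75.0 / 255.0, seed=0):
    """Spatially variant Gaussian noise n = n1 * M as in Eq. (14).

    M acts as a pixel-wise standard-deviation map; here it is a smooth
    toy bump rather than one of the maps shown in Fig. 2.
    """
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h / 2.0, w / 2.0
    bump = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * (0.25 * min(h, w)) ** 2))
    M = peak_sigma * (0.2 + 0.8 * bump)        # pixel-wise noise std
    n1 = rng.standard_normal((h, w))           # n1_ij ~ N(0, 1)
    return n1 * M, M

# usage: noisy = clean + simulate_noniid_noise(*clean.shape[:2])[0]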
Noise Variance Prediction: The S-Net plays the role of noise modeling and is able to infer the noise distribution from the noisy image. To verify the fitting capability of the S-Net, we provided the M

Experiments on Real-World Noise. In this part, we evaluate the performance of VDN on the real blind denoising task, using two benchmark datasets: DND [31] and SIDD [1]. DND consists of 50 high-resolution images with realistic noise from 50 scenes taken by 4 consumer cameras. However, it does not provide any additional noisy-clean image pairs for training the network. SIDD [1] is another real-world denoising benchmark, containing 30,000 real noisy images captured by 5 cameras under 10 scenes. For each noisy image, it estimates one simulated "clean" image through statistical methods [1]. About 80% (∼24,000 pairs) of this dataset are provided for training, and the rest are held out as the benchmark. In addition, 320 image pairs selected from the training portion are packaged together as a medium version of SIDD, called the SIDD Medium Dataset, for fast training of a denoiser. We employed this medium version to train a real-world image denoiser, and tested the performance on the two benchmarks.

Table 3 lists the PSNR results of different methods on the SIDD benchmark. Note that we only list the results of the competing methods that are available on the official benchmark website. It is evident that VDN outperforms the other methods. Note, however, that neither DnCNN-B nor CBDNet performs well, possibly because they were trained on other datasets whose noise types differ from SIDD's. For a fair comparison, we retrained DnCNN-B and CBDNet on the SIDD dataset; their performance on the SIDD validation set is also listed in Table 3. Table 4 lists the performance of all competing methods on the DND benchmark. From the table, it is easy to see that our proposed VDN surpasses all the competing methods. It is worth noting that CBDNet optimizes a network similar to ours, containing an S-Net designed for estimating the noise distribution and a D-Net for denoising. The superiority of VDN over CBDNet mainly benefits from the deep variational inference optimization. For easy visualization, the results of the four best competing methods on one typical denoising example are displayed in Fig. 4. Obviously, WNNM is unable to remove the complex real noise, maybe because the low-rankness prior is insufficient to describe all the image information and the i.i.d. Gaussian noise assumption conflicts with the real noise. With the powerful feature extraction ability of CNNs, DnCNN and CBDNet obtain much better denoising results than WNNM, but a little noise still remains. In contrast, the denoising result of our proposed VDN has almost no noise and is very close to the ground truth. In Fig. 5, we display the noise variance maps predicted by the S-Net on the two real benchmarks. The variance maps have been enlarged several times for easy visualization. It is easy to see that the predicted noise variance map relates to the image content, which is consistent with the well-known signal-dependent property of real noise to some extent.

Hyper-parameter Analysis. The hyper-parameter ε_0 in Eq. (2) determines how much the desired latent clean image z depends on the simulated ground truth x. As discussed in Section 3.6, the negative variational lower bound degenerates into the MSE loss when ε_0 is set to an extremely small value close to 0. The performance of VDN under different ε_0 values on the SIDD validation dataset is listed in Table 5. For explicit comparison, we also directly trained the D-Net under the MSE loss as a baseline.
From Table 5, we can see that: 1) when $\epsilon_0$ is too large, the proposed VDN obtains relatively worse results since the prior constraint on $z$ from the simulated groundtruth $x$ becomes weak; 2) as $\epsilon_0$ decreases, the performance of VDN tends towards that of the MSE loss, as analyzed in theory; 3) the results of VDN surpass the MSE loss by about 0.3 dB PSNR when $\epsilon_0^2 = 10^{-6}$, which verifies the importance of noise modeling in our method. Therefore, we suggest setting $\epsilon_0^2$ to $10^{-5}$ or $10^{-6}$ in the real-world denoising task. In Eq. (3), we introduced a conjugate inverse gamma distribution as the prior for $\sigma^2$. The mode $\xi_i$ of this inverse gamma distribution provides a rational approximation to $\sigma_i^2$, computed as a local estimate in a $p \times p$ window centered at the $i$-th pixel. We compared the performance of VDN under different $p$ values on the SIDD validation dataset in Table 6. Empirically, VDN performs consistently well across values of the hyper-parameter $p$. Conclusion: We have proposed a new variational inference algorithm, namely the variational denoising network (VDN), for blind image denoising. The main idea is to learn an approximation to the true posterior over the latent variables (including the clean image and noise variances) conditioned on the input noisy image. Using this variational posterior expression, both tasks of blind image denoising and noise estimation can be naturally handled in a unified Bayesian framework. The proposed VDN is a generative method, which can easily estimate the noise distribution from the input data. Comprehensive experiments have demonstrated the superiority of VDN over previous works on blind image denoising. Our method can also facilitate the study of other low-level vision tasks, such as super-resolution and deblurring. Specifically, the fidelity term in these tasks can be set more faithfully under the non-i.i.d. noise distribution estimated by VDN, instead of the traditional i.i.d. Gaussian noise assumption.
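As a rough illustration of the $p \times p$ windowed variance estimate mentioned in the hyper-parameter analysis above, the sketch below computes an empirical per-pixel noise variance from a noisy/clean pair. The exact construction of the inverse-gamma mode $\xi_i$ in the method may differ; this is only an assumed approximation for intuition.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance_map(noisy, clean, p=7):
    """Empirical variance of the residual (noisy - clean) inside a p x p
    window centred at each pixel, for grayscale float arrays of equal shape.
    Assumed stand-in for the windowed estimate behind the inverse-gamma mode."""
    residual = noisy - clean
    mean = uniform_filter(residual, size=p)
    mean_sq = uniform_filter(residual ** 2, size=p)
    return np.maximum(mean_sq - mean ** 2, 1e-8)

# Usage: sigma2_hat = local_variance_map(noisy_img, clean_img, p=7)
```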
4,560
1908.11057
2970096436
Textual network embeddings aim to learn a low-dimensional representation for every node in a network so that both the structural and textual information from the network can be well preserved in the representations. Traditionally, the structural and textual embeddings were learned by models that rarely take the mutual influences between them into account. In this paper, a deep neural architecture is proposed to effectively fuse the two kinds of information into one representation. The novelty of the proposed architecture lies in a newly defined objective function, a complementary information fusion method for structural and textual features, and a mutual gate mechanism for textual feature extraction. Experimental results show that the proposed model outperforms the compared methods on all three datasets.
Text Embedding: There have been various methods to embed textual information into vector representations for NLP tasks. Classical methods for embedding textual information include one-hot vectors, term frequency-inverse document frequency (TF-IDF), etc. Due to the high dimensionality and sparsity of such representations, @cite_18 proposed a neural-network-based skip-gram model to learn distributed word embeddings via word co-occurrences in a local window of textual content. To exploit the internal structure of text, convolutional neural networks (CNNs) @cite_4 @cite_10 are applied to obtain latent features of local textual content; a subsequent pooling layer then generates fixed-length representations. To have the embeddings better reflect the correlations among texts, soft attention mechanisms @cite_3 @cite_11 are proposed to calculate the relative importance of words in a sentence by evaluating their relevance to the content of the compared sentences. Alternatively, gating mechanisms are applied to strengthen the relevant textual information while weakening the irrelevant information by controlling the information-flow path of a network in @cite_17 @cite_20 .
{ "abstract": [ "We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.", "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline.", "The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms (2016b) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. 
With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.", "We propose a selective encoding model to extend the sequence-to-sequence framework for abstractive sentence summarization. It consists of a sentence encoder, a selective gate network, and an attention equipped decoder. The sentence encoder and decoder are built with recurrent neural networks. The selective gate network constructs a second level sentence representation by controlling the information flow from encoder to decoder. The second level representation is tailored for sentence summarization task, which leads to better performance. We evaluate our model on the English Gigaword, DUC 2004 and MSR abstractive sentence summarization datasets. The experimental results show that the proposed selective encoding model outperforms the state-of-the-art baseline models.", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature." ], "cite_N": [ "@cite_18", "@cite_4", "@cite_17", "@cite_3", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "1614298861", "2120615054", "2963970792", "2133564696", "1832693441", "2609482285", "2963403868" ] }
A Deep Neural Information Fusion Architecture for Textual Network Embeddings
Networks provide an effective way to organize heterogeneous relevant data, which can often be leveraged to facilitate downstream applications. For example, the huge amount of textual and relationship data in social networks contains abundant information on people's preferences, and thus can be used for personalized advertising and recommendation. To this end, traditionally a matrix representing the network structure is often built first, and subsequent tasks then proceed. However, matrix methods are computationally expensive and cannot be applied to large-scale networks. Network embedding (NE) maps every node of a network into a low-dimensional vector, while seeking to retain the original network information. Subsequent tasks (e.g. similar vertex search, link prediction) can then proceed by leveraging these low-dimensional features. To obtain network embeddings, (Perozzi et al., 2014) proposed to first generate sequences of nodes by randomly walking along connected nodes. Word embedding methods are then employed to produce the embeddings for nodes by noting the analogy between node sequences and sentences in natural languages. Second-order proximity information is further taken into account in (Tang et al., 2015). To gather the network connection information more efficiently, the random walking strategy in (Perozzi et al., 2014) is modified to favor the important nodes in (Grover and Leskovec, 2016). However, all these methods only take the network structure into account, ignoring the huge amount of textual data. In the Twitter social network, for example, tweets posted by a user contain valuable information on the user's preferences, political standpoints, and so on (Bandyopadhyay et al., 2018). To include textual information in the embeddings, (Tu et al., 2017) proposed to first learn embeddings for the textual data and the network structure respectively, and then concatenate them to obtain the embeddings of nodes. The textual and structural embeddings are learned with an objective that encourages embeddings of neighboring nodes to be as similar as possible. An attention mechanism is further employed to highlight the important textual information by taking into account the impacts of texts from neighboring nodes. Later, (Shen et al., 2018b) proposed to use a fine-grained word alignment mechanism to replace the attention mechanism in (Tu et al., 2017) in order to absorb the impacts from neighboring texts more effectively. However, both methods require the textual and structural embeddings of neighboring nodes to be as close as possible even if the nodes share little common content. This could be problematic since a social network user may be connected to users who post totally different viewpoints because of different political standpoints. If two nodes are similar, it is the node embeddings, rather than the individual textual or structural embeddings, that should be close. Forcing representations of dissimilar data to be close is prone to yield bad representations. Moreover, since the structural and textual embeddings contain some common information, if they are concatenated directly, as done in (Tu et al., 2017; Shen et al., 2018b), the information contained in the two parts becomes entangled in some very complicated way, increasing the difficulty of learning representative network embeddings. In this paper, we propose a novel deep neural Information Fusion Architecture for textual Network Embedding (NEIFA) to tackle the issues mentioned above.
Instead of forcing the separate embeddings of structures and texts from neighboring nodes to be close, we define the learning objective based on the node embeddings directly. For the problem of information entanglement, inspired by the gating mechanism of long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), we extract the complementary information from texts and structures and then use it to constitute the node embeddings. A mutual gate is further designed to highlight a node's textual information that is consistent with its neighbors' textual contents, while diminishing the information that contradicts them. In this way, the model provides a mechanism that only allows the information that is consistent among neighboring nodes to flow into the node embeddings. The proposed network embedding method is evaluated on the tasks of link prediction and vertex classification, using three real-world datasets from different domains. It is shown that the proposed method outperforms state-of-the-art network embedding methods on the task of link prediction by a substantial margin, demonstrating that the obtained embeddings retain the information in the original networks well. Similar phenomena can also be observed in the vertex classification task. These results suggest the effectiveness of the proposed neural information fusion architecture for textual network embeddings. Network Embedding: Network embedding methods can be categorized into two classes: (1) methods that solely utilize structure information; and (2) methods that consider both structure and textual content associated with vertices. For the first type of methods, DeepWalk (Perozzi et al., 2014) was the first to introduce neural network techniques into the network embedding field. In DeepWalk, node sequences are generated by randomly walking on the network, and dense latent representations are obtained by feeding those node sequences into the skip-gram model. LINE (Tang et al., 2015) exploited the first-order and second-order proximity information of vertices in a network by optimizing the joint and conditional probabilities of edges. Further, Node2Vec (Grover and Leskovec, 2016) proposed a biased random walk to search a network and generate node sequences based on depth-first and breadth-first search. However, these methods only embed the structure information into vector representations, while ignoring the informative textual contents associated with vertices. To address this issue, some recent work exploits the joint impact of structure and textual contents to obtain better representations. TADW (Yang et al., 2015) proved that DeepWalk is equivalent to matrix factorization and that the textual information can be incorporated by simply adding the textual features into the matrix factorization. CENE (Sun et al., 2016) works by transforming the textual content into another kind of vertex, and the vertices are embedded into low-dimensional representations on the extended network. CANE (Tu et al., 2017) proposed to learn separate embeddings for the textual and structural information, and to obtain the network embeddings by simply concatenating them, in which a mutual attention mechanism is used to model the semantic relationship between textual contents. WANE (Shen et al., 2018b) modified the semantic extraction strategy in CANE by introducing a fine-grained word alignment technique to learn word-level semantic information more effectively.
However, most recent methods force the textual and structural embeddings of two neighboring nodes to be close to each other irrespective of their underlying contents. The Proposed Method: A textual network is defined as $G = \{V, E, T\}$, where $V$, $E$ and $T$ denote the vertices in the graph, the edges between vertices and the textual content associated with vertices, respectively. Each edge $e_{i,j} \in E$ indicates that there is a relationship between vertex $v_i$ and $v_j$. Training Objective: Suppose the structural and textual features of node $i$ are given and are denoted as $s_i$ and $t_i$, respectively. Existing methods are built on objectives that encourage the structural and textual features of neighboring nodes to be as similar as possible. As discussed in the previous sections, this may make the node embeddings deviate from the true information in the nodes. In this paper, we define the objective based on the node embeddings directly, that is,
$L = \sum_{\{i,j\} \in E} \log p(h_i \mid h_j)$, (1)
where $h_i$ is the network embedding of node $i$, constructed from $s_i$ and $t_i$ by
$h_i = F(s_i, t_i)$; (2)
$F(\cdot, \cdot)$ is the fusion function that maps the structural and textual features into the network embeddings; and $p(h_i \mid h_j)$ denotes the conditional probability of network embedding $h_i$ given the network embedding $h_j$. Following LINE (Tang et al., 2015), the conditional probability in (1) is defined as:
$p(h_i \mid h_j) = \frac{\exp(h_i \cdot h_j)}{\sum_{z \in V} \exp(h_i \cdot h_z)}$. (3)
Note that the structural feature $s_i$ is randomly initialized and will be learned along with the other model parameters. The textual feature $t_i$ is obtained via a trainable feature extraction function from the given texts, i.e.,
$t_i = T(x_i)$, (4)
where $x_i$ represents the texts associated with node $i$. From the definition of the objective function (1), it can be seen that it is the network embeddings of nodes, rather than the individual structural or textual embeddings, that are encouraged to be close for neighboring nodes. Details on how to realize the fusion function $F(\cdot, \cdot)$ and the feature extraction function $T(\cdot)$ are deferred to Section 3.2 and Section 3.3, respectively. The overall framework of our proposed NEIFA is shown in Fig. 1. Fusion of Structural and Textual Features $F(s_i, t_i)$: In this section, we present how to fuse the structural and textual features to yield the embeddings for nodes. The fusion module $F(s_i, t_i)$ is illustrated in Fig. 2. The simplest way to obtain network embeddings is to concatenate the two features directly, i.e. $h_i = [s_i; t_i]$. However, it is known that the structural and textual features are not fully exclusive, and often contain some common information. Thus, if the network embeddings are generated by simply concatenating the two features, different parts of the embeddings become entangled with each other in some unknown but complex way. This may make the process of optimizing the objective function more difficult and hinder the model from learning representative embeddings for the nodes. In this paper, we instead first distill from $s_i$ the information that is complementary to the textual feature $t_i$, and then concatenate the two pieces of complementary information to constitute the embeddings of nodes. To distill the complementary information from the structural feature $s_i$, inspired by LSTM, an input gate is designed to eliminate the information in $s_i$ that has already appeared in $t_i$.
Specifically, the gate is designed as
$g_i = 1 - \sigma((P s_i + b_g) \odot t_i)$, (5)
where $\sigma(\cdot)$ is the sigmoid function; $\odot$ denotes element-wise multiplication; and $P$ and $b_g$ are used to align the structural feature $s_i$ with the space of the textual features $t_i$. From the definition of $g_i$, it can be seen that if the values on some specific dimension of $P s_i + b_g$ and $t_i$ are both large, which indicates that the same information appears in both $s_i$ and $t_i$, the gate $g_i$ will be closed. So, if $(P s_i + b_g) \odot t_i$ is multiplied by the gate $g_i$, only the information that is not contained in both $s_i$ and $t_i$ is allowed to pass through. Thus, $((P s_i + b_g) \odot t_i) \odot g_i$ can be understood as the information in $s_i$ that is complementary to $t_i$. In practice, we untie the values of $P$ and $b_g$, and use a new trainable matrix $Q$ and bias $b_c$ instead. The complementary information is eventually computed as
$z_i = ((Q s_i + b_c) \odot t_i) \odot g_i$. (6)
Then, we concatenate the complementary information $z_i$ with the textual features $t_i$ to produce the final network embedding
$h_i = [z_i; t_i]$. (7)
In this way, given the structural and textual features $s_i$ and $t_i$, we successfully extract the complementary information from $s_i$ and generate the final network embedding $h_i$. Textual Feature Extraction $T(x_i)$: When extracting textual features for the embeddings of nodes, the impacts from neighboring nodes should also be taken into account, i.e. highlighting the consistent information while dampening the inconsistent information. To this end, we first represent words with their corresponding embeddings, and then apply a one-layer CNN followed by an average pooling operator to extract the raw features of the texts (Tu et al., 2017; Shen et al., 2018a). Given the raw textual features $r_i$ and $r_j$ of two neighboring nodes $i$ and $j$, we diminish the information that is not consistent between the two raw features. Specifically, the final textual features of nodes $i$ and $j$ are computed as
$t_i = r_i \odot \sigma(r_j)$, (8)
$t_j = r_j \odot \sigma(r_i)$, (9)
where $\sigma(\cdot)$ serves the role of gating. Since the raw textual feature $r_i$ often exhibits specific meanings on different dimensions, the expressions (8) and (9) can be understood as a way to control which information is allowed to flow into the embeddings. Only the information that is consistent among neighboring nodes can appear in the textual feature $t_i$, which is then fused into the network embeddings. There are a variety of other nonlinear functions that could serve the role of gating, but in this work the simple yet effective sigmoid function is employed. Training Details: Maximizing the objective function in (1) requires computing the expensive softmax function repeatedly, in which a summation over all nodes of the network is needed for each iteration. To address this issue, for each edge $e_{i,j} \in E$, we introduce negative sampling (Mikolov et al., 2013b) to simplify the optimization process. The conditional probability $p(h_i \mid h_j)$ is therefore replaced by the following objective:
$\log \sigma(h_i \cdot h_j) + \sum_{k=1}^{K} \mathbb{E}_{h_k \sim P(h)} [\log \sigma(-h_k \cdot h_i)]$ (10)
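As a minimal illustration of the mutual gating and complementary-information fusion in Eqs. (5)-(9), the numpy sketch below computes the node embedding for one pair of neighboring nodes. The feature dimensions, the random parameter values and the variable names are assumptions made purely for the example, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mutual_gate(r_i, r_j):
    """Eqs. (8)-(9): each raw textual feature is gated by the sigmoid of its
    neighbour's raw feature, keeping only mutually consistent content."""
    return r_i * sigmoid(r_j), r_j * sigmoid(r_i)

def fuse(s_i, t_i, P, b_g, Q, b_c):
    """Eqs. (5)-(7): distil from s_i the information complementary to t_i,
    then concatenate it with t_i to form the node embedding h_i."""
    g_i = 1.0 - sigmoid((P @ s_i + b_g) * t_i)   # gate closes where info overlaps
    z_i = ((Q @ s_i + b_c) * t_i) * g_i          # complementary information
    return np.concatenate([z_i, t_i])

# Toy usage with assumed dimensions (structural dim 100, textual dim 100).
d_s, d_t = 100, 100
rng = np.random.default_rng(0)
s_i = rng.normal(size=d_s)
r_i, r_j = rng.normal(size=d_t), rng.normal(size=d_t)
P, Q = rng.normal(size=(d_t, d_s)), rng.normal(size=(d_t, d_s))
b_g, b_c = np.zeros(d_t), np.zeros(d_t)
t_i, t_j = mutual_gate(r_i, r_j)
h_i = fuse(s_i, t_i, P, b_g, Q, b_c)
```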
Experiments: To evaluate the quality of the network embeddings generated by the proposed method, we apply them to two tasks: link prediction and vertex classification. Link prediction aims to predict whether there exists a link between two randomly chosen nodes based on the similarity of the embeddings of the two nodes. Vertex classification, on the other hand, tries to classify the nodes into different categories based on the embeddings, provided that some supervised information is available. Both tasks can achieve good performance only when the embeddings retain important information about the nodes, including both the structural and textual information. In the following, we first introduce the datasets and baselines used in this paper, then describe the evaluation metrics and experimental setups, and lastly report the performance of the proposed model on the tasks of link prediction and vertex classification, respectively. Datasets and Baselines: Experiments are conducted on three real-world datasets: Zhihu (Sun et al., 2016), Cora (McCallum et al., 2000) and HepTh (Leskovec et al., 2005). Detailed descriptions of the three datasets are given below, with summary statistics in Table 1. The preprocessing procedure for these datasets is the same as that in (Tu et al., 2017) 1. • Zhihu (Sun et al., 2016) is a Q&A-based community social network. In our experiment, 10000 active users and the descriptions of their interested topics are collected as the vertices and texts of the social network to be studied. There are 43894 edges in total, indicating the relationships between active users. • Cora (McCallum et al., 2000) is a citation network that consists of 2277 machine learning papers with text contents, divided into 7 categories. The citation relations among the papers are reflected in the 5214 edges. • HepTh (Leskovec et al., 2005) (High Energy Physics Theory) is a citation network from the e-print arXiv. In our experiment, 1038 papers with abstract information are collected, among which 1990 edges are observed. To evaluate the effectiveness of our proposed model, we compare against several strong baseline methods, which are divided into two categories as follows: • Structure-only: DeepWalk (Perozzi et al., 2014), LINE (Tang et al., 2015), Node2vec (Grover and Leskovec, 2016). • Structure and Text: TADW (Yang et al., 2015), CENE (Sun et al., 2016), CANE (Tu et al., 2017), WANE (Shen et al., 2018b). Evaluation Metrics and Experimental Setups: In link prediction, the area under the curve (AUC) (1982) is used as the performance criterion; it represents the probability that vertices in a random unobserved link are more similar than those in a random non-existent link. For the vertex classification task, a logistic regression model is first trained to classify the embeddings into different categories based on the provided labels of nodes. The trained model is then used to classify the network embeddings in the test set, and the classification accuracy is used as the performance criterion for this task. To have a fair comparison with competing methods, the dimension of the network embeddings is set to 200 for all considered methods. The number of negative samples is set to 1 and the mini-batch size is set to 64 to speed up the training process. Adam (Kingma and Ba, 2014) is employed to train our model with a learning rate of $1 \times 10^{-3}$. Link Prediction: We randomly extract a portion of the edges to constitute the training set, and use the rest as the test set. The AUC scores of different models under training proportions ranging from 15% to 95% on the Zhihu, Cora and HepTh datasets are shown in Table 2, Table 3 and Table 4, respectively, with the best performance highlighted in bold.
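For reference, such AUC scores can be computed from learned embeddings roughly as follows. This is a sketch assuming scikit-learn's roc_auc_score and that candidate node pairs are scored by the inner product used in Eq. (3); it is not the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def link_prediction_auc(emb, pos_edges, neg_edges):
    """AUC for link prediction: score each (i, j) pair by the inner product of
    their embeddings and measure how well the scores separate held-out edges
    (positives) from sampled non-edges (negatives).
    emb: array of shape (num_nodes, dim); edges: iterables of (i, j) pairs."""
    def scores(edges):
        return np.array([emb[i] @ emb[j] for i, j in edges])
    y_true = np.concatenate([np.ones(len(pos_edges)), np.zeros(len(neg_edges))])
    y_score = np.concatenate([scores(pos_edges), scores(neg_edges)])
    return roc_auc_score(y_true, y_score)
```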
As can be seen from Table 2, our proposed method substantially outperforms all other baselines on the Zhihu dataset, with approximately a 10 percent improvement over the current state-of-the-art WANE model. This may be partially attributed to the complexity of the Zhihu dataset, in which both the structures and texts contain important information. If the two individual features are concatenated directly, there may be a severe information overlap problem, limiting the models' ability to learn good embeddings. The proposed complementary information fusion method alleviates this issue by disentangling the structural and textual features. In addition, the proposed mutual gate mechanism, which removes inconsistent textual information from a node's textual feature, also contributes to the performance gains. On the other hand, the substantial gain may also be partially attributed to the objective function that is directly defined on the network embeddings. That is because inconsistencies of the structural or textual information among neighboring nodes are more likely to occur in complex networks. For the other two datasets, as shown in Table 3 and Table 4, our proposed method outperforms the baseline methods overall. The results strongly demonstrate that the network embeddings generated by the proposed model better preserve the original information in the nodes. It can also be seen that the performance gains observed on the Cora and HepTh datasets are not as substantial as those on the Zhihu dataset. The relatively small improvement may be attributed to the fact that the numbers of edges and neighbors in the Cora and HepTh datasets are much smaller than in the Zhihu dataset. We speculate that the information in the structures of these two datasets is far less than that in the texts, implying that the overlap issue is not as severe as in Zhihu. Hence, direct concatenation does not induce significant performance loss. Ablation Study: To demonstrate the effectiveness of the proposed fusion method and mutual gate mechanism, three variants of the proposed model are evaluated: (1) NEIFA (w/o FM): NEIFA without both the fusion process and the mutual gate mechanism, where the raw textual features r are directly regarded as the network embeddings. (2) NEIFA (w/o F): NEIFA without the fusion process, where the textual features t are directly regarded as the network embeddings. (3) NEIFA (w/o M): NEIFA without the mutual gate mechanism, where the network embeddings are obtained by fusing the structural features and the raw textual features. The three variants are compared with the original NEIFA model on the three datasets above. The results are shown in Fig. 3. It can be seen that for networks with very sparse structure, such as HepTh, the method that simply uses the raw textual features as the network embeddings can achieve fairly good performance. On such simple datasets, the proposed model even exhibits worse performance when the proportion of training edges is small. As the datasets become larger and more complex network structure is included, the performance of using only the textual embeddings decreases rapidly. The reason may be that as the networks grow, the differences in structural or textual data among neighboring nodes become more apparent, and the advantages of the mutual gate mechanism and information fusion become more evident. Vertex Classification: To demonstrate the superiority of the proposed method, the vertex classification experiment is also conducted on the Cora dataset.
This experiment is based on the premise that if the original network contains different types of nodes, good embeddings should allow the nodes to be classified into their specific classes easily by a simple classifier. For the proposed method, the embedding of a node varies as it interacts with different nodes. To obtain a fixed embedding, we follow the procedure in (Tu et al., 2017) and compute a node's embedding by averaging the embeddings obtained when the node interacts with its different neighbors. We then randomly split the node embeddings of all nodes into a training and a testing set with a 50%-50% proportion. A logistic regression classifier with L2 regularization (Fan et al., 2008) is trained on the node embeddings from the training set. The classification performance is evaluated on the held-out testing set. The above procedure is repeated 10 times and the average value is reported as the final performance. It can be seen from Fig. 4 that methods considering both structural and textual information show better classification accuracies than methods leveraging only structural information, demonstrating the importance of incorporating textual information into the embeddings. Moreover, NEIFA outperforms all methods considered, which further demonstrates the superiority of our proposed model. To intuitively understand the embeddings produced by the proposed model, we employ t-SNE to map the learned embeddings into a 2-D space. The result is shown in Fig. 5, where different colors indicate that the nodes belong to different categories. Note that, although the t-SNE mapping is trained without using any category labels, the latent label information is still partially recovered. As shown in Fig. 5, points with the same color are close to each other, while points with different colors are far apart. Conclusions: In this paper, a novel deep neural architecture is proposed to effectively fuse the structural and textual information in networks. Unlike existing embedding methods, which encourage both the textual and structural embeddings of two neighboring nodes to be close to each other, we define the training objective based on the node embeddings directly. To address the information duplication problem in the structural and textual features, a complementary information fusion method is further developed to fuse the two features. Besides, a mutual gate is designed to highlight the textual information in a node that is consistent with the textual contents of neighboring nodes, while diminishing the information that conflicts with them. Extensive experimental results on several tasks demonstrate the advantages of our proposed model.
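As a concrete illustration of the vertex classification protocol described earlier (a 50%-50% split, an L2-regularized logistic regression on averaged node embeddings, and a 2-D t-SNE projection for visualization), the following is a minimal sketch assuming scikit-learn rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.manifold import TSNE

def classify_and_project(node_emb, labels, seed=0):
    """node_emb: array (num_nodes, dim) of averaged node embeddings;
    labels: array (num_nodes,) of category labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        node_emb, labels, test_size=0.5, random_state=seed)
    clf = LogisticRegression(penalty="l2", max_iter=1000).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)                 # classification accuracy
    xy = TSNE(n_components=2, random_state=seed).fit_transform(node_emb)
    return acc, xy                              # xy can be scatter-plotted

# Usage: repeat over several random seeds and report the mean accuracy.
```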
3,810
1908.10797
2971306187
The Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and has achieved state-of-the-art (SOTA) performance. However, despite its impressive results, the large number of parameters in the RNN model makes deployment on mobile and embedded devices infeasible. Driven by this problem, many works have proposed a number of pruning methods to reduce the size of the RNN model. In this work, we propose an end-to-end pruning method for image captioning models equipped with visual attention. Our proposed method is able to achieve sparsity levels up to 97.5% without significant performance loss relative to the baseline (around 1% loss at 40× compression of the GRU model). Our method is also simple to use and tune, facilitating faster development times for neural network practitioners. We perform extensive experiments on the popular MS-COCO dataset in order to empirically validate the efficacy of our proposed method.
Modern neural networks that provide good performance tend to be large and overparameterised, fuelled by observations that larger networks @cite_38 @cite_6 @cite_46 tend to be easier to train. This in turn drives numerous efforts to reduce model size using techniques such as weight pruning and quantisation @cite_34 @cite_5 @cite_31 .
{ "abstract": [ "Convexity has recently received a lot of attention in the machine learning community, and the lack of convexity has been seen as a major disadvantage of many learning algorithms, such as multi-layer artificial neural networks. We show that training multi-layer neural networks in which the number of hidden units is learned can be viewed as a convex optimization problem. This problem involves an infinite number of variables, but can be solved by incrementally inserting a hidden unit at a time, each time finding a linear classifier that minimizes a weighted sum of errors.", "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32 ( ) memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58 ( ) faster convolutional operations (in terms of number of the high precision operations) and 32 ( ) memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than (16 , ) in top-1 accuracy. Our code is available at: http: allenai.org plato xnornet.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "We introduce a method to train Quantized Neural Networks (QNNs) -- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At traintime the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. 
For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51 top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.", "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.", "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN." ], "cite_N": [ "@cite_38", "@cite_31", "@cite_6", "@cite_5", "@cite_46", "@cite_34" ], "mid": [ "2167967601", "2300242332", "1821462560", "2524428287", "2950220847", "2963114950" ] }
Image Captioning with Sparse Recurrent Neural Network
Automatically generating a caption that describes an image, a problem known as image captioning, is a challenging problem where computer vision (CV) meets natural language processing (NLP). A well-performing model not only has to identify the objects in the image, but also has to capture the semantic relationships between them, the general context and the activities that they are involved in. Lastly, the model has to map the visual representation into a fully-formed sentence in a natural language such as English. A good image captioning model can have many useful applications, which include helping the visually impaired to better understand web contents, providing descriptive annotations of website contents, and enabling better context-based image retrieval by tagging images with accurate natural language descriptions. Driven by user privacy concerns and the quest for lower user-perceived latency, deployment on edge devices away from remote servers is required. As edge devices usually have limited battery capacity and thermal limits, this presents a few key challenges in the form of storage size, power consumption and computational demands [1]. For models incorporating RNNs, on-device inference is often memory bandwidth-bound. As RNN parameters are fixed at every time step, parameter reading forms the bulk of the work [2,1]. As such, RNN pruning offers the opportunity not only to reduce the amount of memory access but also to fit the model in on-chip SRAM cache rather than off-chip DRAM memory, both of which dramatically reduce power consumption [3,4]. Similarly, sparsity patterns for pruned RNNs are fixed across time steps. This offers the potential to factorise scheduling and load balancing operations outside of the loop and enable reuse [2]. Lastly, pruning allows larger RNNs to be stored in memory and trained [2,5]. In this work, we propose a one-shot end-to-end pruning method to produce very sparse image captioning decoders (up to 97.5% sparsity) while maintaining good performance relative to the dense baseline model as well as competing methods. We detail our contributions in the following section (Sec. 2.2). Model pruning: Modern neural networks that provide good performance tend to be large and overparameterised, fuelled by observations that larger networks [6,7,8] tend to be easier to train. This in turn drives numerous efforts to reduce model size using techniques such as weight pruning and quantisation [9,10,11]. Early works like [12] and [13] explored pruning by computing the Hessian of the loss with respect to the parameters in order to assess the saliency of each parameter. Other works involving saliency computation include [14] and [15], where the sensitivity of the loss with respect to neurons and weights is used respectively. On the other hand, works such as [16,17] directly induce network sparsity by incorporating sparsity-enforcing penalty terms into the loss function. Most of the recent works in network pruning focus on vision-centric classification tasks using Convolutional Neural Networks (CNNs) and occasionally RNNs. Techniques proposed include magnitude-based pruning [3,4,18] and variational pruning [19,20,21]. Among these, magnitude-based weight pruning has become popular due to its effectiveness and simplicity. Most notably, [3] employed a combination of pruning, quantisation and Huffman encoding, resulting in massive reductions in model size without affecting accuracy.
While unstructured sparse connectivity provides a reduction in storage size, it requires sparse General Matrix-Matrix Multiply (GEMM) libraries such as cuSPARSE and SPBLAS in order to achieve accelerated inference. Motivated by existing hardware architectures optimised for dense linear algebra, many works propose techniques to prune and induce sparsity in a structured way in which entire filters are removed [22,23,24]. On the other hand, works extending connection pruning to RNN networks are considerably fewer [25,2,1,26]. See et al. [25] first explored magnitude-based pruning applied to a deep multi-layer neural machine translation (NMT) model with Long Short-Term Memory (LSTM) [27]. In their work, three pruning schemes are evaluated: class-blind, class-uniform and class-distribution. Class-blind pruning was found to produce the best result compared to the other two schemes. Narang et al. [2] introduced a gradual magnitude-based pruning scheme for speech recognition RNNs whereby all the weights in a layer with magnitude less than some chosen threshold are pruned. Gradual pruning is performed in parallel with network training, while the pruning rate is controlled by a slope function with two distinct phases. This is extended by Zhu and Gupta [1], who simplified the gradual pruning scheme with fewer hyperparameters. Our contribution: Our proposed end-to-end pruning method possesses three main qualities: i) Simple and fast. Our approach enables easy pruning of the RNN decoder equipped with visual attention, whereby the best number of weights to prune in each layer is automatically determined. Compared to works such as [1,2], our approach is simpler, with 1 to 2 hyperparameters versus 3 to 4 hyperparameters. Our method also does not rely on reinforcement learning techniques such as in the work of [28]. Moreover, our method applies pruning to all the weights in the RNN decoder and does not require special considerations to exclude certain weight classes from pruning. Lastly, our method completes pruning in a single-shot process rather than requiring an iterative train-and-prune process as in [29,30,31,32]. ii) Good performance-to-sparsity ratio enabling very high sparsity. Our approach achieves good performance across sparsity levels from 80% up to 97.5% (40× reduction in the Number of Non-Zero (NNZ) parameters). This is in contrast with competing methods [1,25], where there is a significant performance drop-off starting at a sparsity level of 90%. iii) Easily tunable sparsity level. Our approach provides a way for neural network practitioners to easily control the level of sparsity and compression desired. This allows for model solutions that are tailored for each particular scenario. In contrast, while the closely related works of [33,34] also provide good performance with the incorporation of gating variables, there is no straightforward way of controlling the final sparsity level. In their works, regularisers such as the bi-modal, l2, l1 and l0 regularisers are used to encourage network sparsity. Those works also focus only on image classification using CNNs. While there are other works on compressing RNNs, most of the methods proposed either come with structural constraints or are complementary to model pruning in principle. Examples include using low-rank matrix factorisations [35,36], product quantisation on embeddings [37], factorising word predictions into multiple time steps [38,39,40], and grouping RNNs [41].
Lastly, another closely related work by [30] also incorporated model pruning into image captioning. However, we note three notable differences: 1) their work is focused on proposing a new LSTM cell structure named the H-LSTM; 2) their work utilises the grow-and-prune (GP) method [31], which necessitates compute- and time-expensive iterative pruning; and 3) the compression figures stated are calculated based on the size of the LSTM cells instead of the entire decoder. Proposed Method: Our proposed method involves incorporating learnable gating parameters into a regular image captioning framework. We denote weight, bias and gating matrices as $W$, $B$ and $G$ respectively. For a model with $L$ layers, the captioning and gating parameters are denoted as $\theta$ and $\phi$ such that $\theta = \{W_{1:L}, B_{1:L}\}$ and $\phi = \{G_{1:L}\}$. As there is substantial existing work focusing on pruning CNNs, we focus our efforts on pruning generative RNNs. As such, we only prune the RNN decoder. All model size calculations in this work include only the decoder (including the attention module), while the encoder (i.e. the CNN) is excluded. Image captioning with visual attention: Our image captioning framework of interest is a simplified variant of the Show, Attend and Tell [42] model, which uses a single-layer RNN network equipped with visual attention on the CNN feature map. It is a popular framework that forms the basis for subsequent state-of-the-art (SOTA) works on image captioning [43,44]. In this work, we employ LSTM and Gated Recurrent Unit (GRU) [45] cells as the RNN cell. (Figure 2 caption: detailed explanation for (a) is given in Sec. 5.6; in (b), "Weighted annealed loss" refers to $\lambda_s L_s$ in Eq. 14, while "Loss" refers to $L_s$ before applying cosine annealing in Eq. 11.) Suppose $\{S_0, \ldots, S_{T-1}\}$ is a sequence of words in a sentence of length $T$; the model directly maximises the probability of the correct description given an image $I$ using the following formulation:
$\log p(S \mid I) = \sum_{t=0}^{T} \log p(S_t \mid I, S_{0:t-1}, c_t)$ (1)
where $t$ is the time step and $p(S_t \mid I, S_{0:t-1}, c_t)$ is the probability of generating a word given an image $I$, previous words $S_{0:t-1}$, and context vector $c_t$. For an RNN network with $r$ units, the hidden state of the RNN is initialised with the image embedding vector as follows:
$h_{t=-1} = W_I I_{embed}, \quad m_{t=-1} = 0$ (2)
where $W_I \in \mathbb{R}^{r \times h}$ is a weight matrix and $h$ is the size of $I_{embed}$. The attention function used in this work is the soft attention introduced by [46] and used in [42], where a multilayer perceptron (MLP) with a single hidden layer is employed to calculate the attention weights on a particular feature map. The context vector $c_t$ is then concatenated with the previous predicted word embedding to serve as input to the RNN. Finally, a probability distribution over the vocabulary is produced from the hidden state $h_t$:
$p_t = \mathrm{Softmax}(E_o h_t)$ (3)
$h_t, m_t = \mathrm{RNN}(x_t, h_{t-1}, m_{t-1})$ (4)
$x_t = [E_w S_{t-1}, c_t]$ (5)
$c_t = \mathrm{SoftAtt}(f)$ (6)
where $E_w \in \mathbb{R}^{q \times v}$ and $E_o \in \mathbb{R}^{v \times r}$ are the input and output embedding matrices respectively; $p_t$ is the probability distribution over the vocabulary $V$; $m_t$ is the memory state; $x_t$ is the current input; $S_{t-1} \in \mathbb{R}^{v}$ is the one-hot vector of the previous word; $c_t \in \mathbb{R}^{a}$ is the context vector; $f$ is the CNN feature map; and $[\,,\,]$ is the concatenation operator. Finally, the standard cross-entropy loss function for the captioning model $\theta$ is given by:
$L_c = -\sum_{t=0}^{T} \log p_t(S_t) + \lambda_d \lVert \theta \rVert_2^2$ (7)
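The per-step computation in Eqs. (3)-(6) can be sketched roughly as below in numpy. The GRU cell is a minimal textbook variant, biases and any projection of the attended context to dimension a are omitted, and all parameter names and shapes are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention(f, h_prev, Wf, Wh_att, v):
    """Single-hidden-layer MLP attention over the CNN feature map f
    (e.g. 196 x 832): score each location, normalise, return the weighted sum."""
    e = np.tanh(f @ Wf + h_prev @ Wh_att) @ v   # (196,) attention scores
    alpha = softmax(e)
    return alpha @ f                            # context vector c_t

def gru_step(x, h_prev, Wz, Wr, Wh):
    """Minimal GRU cell operating on the concatenated input [x, h_prev]."""
    xh = np.concatenate([x, h_prev])
    z = 1.0 / (1.0 + np.exp(-(Wz @ xh)))        # update gate
    r = 1.0 / (1.0 + np.exp(-(Wr @ xh)))        # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h_prev]))
    return (1.0 - z) * h_prev + z * h_tilde

def decode_step(word_emb, h_prev, f, params):
    """One step of Eqs. (3)-(6): attend, concatenate, recur, project to vocab."""
    c_t = soft_attention(f, h_prev, params["Wf"], params["Wh_att"], params["v"])
    x_t = np.concatenate([word_emb, c_t])       # Eq. (5)
    h_t = gru_step(x_t, h_prev, params["Wz"], params["Wr"], params["Wh"])
    p_t = softmax(params["Eo"] @ h_t)           # Eq. (3), distribution over vocab
    return p_t, h_t
```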
End-to-end pruning. Formulation: Similar to [1], the TensorFlow framework is extended to prune network connections during training. Inspired by the concept of learnable Supermasks introduced by [33,47], our proposed method achieves model pruning via learnable gating variables that are trained in an end-to-end fashion. An overview of our method is illustrated in Fig. 1. For every weight variable matrix $W$ to be pruned, we create a gating variable matrix $G$ with the same shape as $W$. This gating matrix $G$ functions as a masking mechanism that determines which parameters $w$ in the weight matrix $W$ participate in both the forward execution and the back-propagation of the graph. To achieve this masking effect, we calculate the effective weight tensor as follows:
$\hat{W}_l = W_l \odot G_l^b$ (8)
$G_l^b = z(\sigma(G_l))$ (9)
where $W_l, G_l \in \mathbb{R}^D$ are the original weight and gating matrices of layer $l$ with shape $D$, and the superscript $(\cdot)^b$ indicates binary sampled variables; $\odot$ is element-wise multiplication; $\sigma(\cdot)$ is a point-wise function that transforms continuous values into the interval $(0, 1)$; and $z(\cdot)$ is a point-wise function that samples from a Bernoulli distribution. The composite function $z(\sigma(\cdot))$ thus effectively transforms continuous values into binary values. Binary gating matrices $G^b$ can be obtained by treating $\sigma(G)$ as Bernoulli random variables. While there are many possible choices for the $\sigma$ function, we decided to use the logistic sigmoid function following [48] and [47]. To sample from the Bernoulli distribution, we can either perform an unbiased draw or a maximum-likelihood (ML) draw [33]. An unbiased draw is the usual sampling process where a gating value $g \in (0, 1)$ is binarised to 1.0 with probability $g$ and 0.0 otherwise, whereas an ML draw involves thresholding the value $g$ at 0.5. In this work, we denote the unbiased and ML draws by the sampling functions $z(\cdot) = \mathrm{Bern}(\cdot)$ and $z(\cdot) = \mathrm{Round}(\cdot)$ respectively. We back-propagate through both sampling functions using the straight-through estimator [48] (i.e. $\delta z(g)/\delta g = 1$). Prior to training, all the gating variables are initialised to the same constant value $m$, while the weights and biases of the network are initialised using standard initialisation schemes (e.g. Xavier [49]). During training, the two sampling functions $\mathrm{Bern}(\cdot)$ and $\mathrm{Round}(\cdot)$ are used in different ways. To obtain the effective weight tensor used to generate network activations, we utilise $\mathrm{Bern}(\cdot)$ to inject some stochasticity, which helps with training and mitigates the bias arising from the constant-value initialisation. Thus the effective weight calculation becomes:
$\hat{W}_l = W_l \odot \mathrm{Bern}(\sigma(G_l))$ (10)
To drive the sparsity level of the gating variables $\phi$ to the user-specified level $s_{target}$, we introduce a regularisation term $L_s$. Consistent with the observations in the works of [1] and [32], we found that annealing the loss over the course of training produces the best result. Annealing is done using a cosine curve $\alpha$ defined in Eq. 12. To ensure determinism when calculating sparsity, we use $\mathrm{Round}(\cdot)$ to sample from $\sigma(G)$:
$L_s = (1 - \alpha) \times \left( s_{target} - \left(1 - \frac{p_{nnz}}{p_{total}}\right) \right)$ (11)
$\alpha = \frac{1}{2}\left(1 + \cos\frac{n\pi}{n_{max}}\right)$ (12)
$p_{nnz} = \sum_{l=0}^{L} \sum_{j=0}^{J} \mathrm{Round}(\sigma(g_{j,l}))$ (13)
where $p_{nnz}$ is the number of NNZ gating parameters; $p_{total}$ is the total number of gating parameters; $n$ and $n_{max}$ are the current and final training steps respectively; $g_{j,l}$ is the gating parameter at position $j$ in the matrix $G_l$ of layer $l$; $L$ is the number of layers; and $J$ is the number of parameters in matrix $G_l$.
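A rough numpy sketch of the forward-pass gating and the annealed sparsity penalty in Eqs. (8)-(13) is given below. It only illustrates the computation: the straight-through gradient is noted in comments rather than implemented, the exact form of the penalty follows the reconstruction of Eq. (11) above (an assumption), and all sizes and constants are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def effective_weights(W, G, rng, training=True):
    # Eq. (10) during training: stochastic Bernoulli mask Bern(sigmoid(G)).
    # At export time, Eq. (15) uses a deterministic Round instead.
    # In the actual model, gradients pass through the sampling via the
    # straight-through estimator, i.e. d z(g) / d g = 1.
    probs = sigmoid(G)
    mask = (rng.random(G.shape) < probs) if training else (probs >= 0.5)
    return W * mask

def sparsity_loss(G_list, s_target, step, max_steps):
    # Eqs. (11)-(13): cosine-annealed penalty that grows towards the end of
    # training and pushes the ML-rounded gate sparsity towards s_target.
    # The exact penalty form (e.g. whether the gap is taken in magnitude)
    # is an assumption based on the text.
    p_nnz = sum(float((sigmoid(G) >= 0.5).sum()) for G in G_list)
    p_total = sum(G.size for G in G_list)
    alpha = 0.5 * (1.0 + np.cos(np.pi * step / max_steps))
    return (1.0 - alpha) * (s_target - (1.0 - p_nnz / p_total))

# Toy usage: one 512 x 512 weight matrix, gates initialised to m = 5.0 so that
# almost every weight is kept at the start of training.
rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512))
G = np.full_like(W, 5.0)
W_eff = effective_weights(W, G, rng)                  # used in the forward pass
L_s = sparsity_loss([G], s_target=0.95, step=100, max_steps=10000)
```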
The progression of the sparsity loss $L_s$ as well as the sparsity levels of various layers in the decoder are illustrated in Fig. 2b and 2a respectively. The final objective function used to train the captioning model $\theta$ with gating variables $\phi$ is:
$L(I, S, s_{target}) = L_c + \lambda_s L_s$ (14)
Intuitively, the captioning loss term $L_c$ provides supervision for learning the saliency of each parameter, whereby important parameters are retained with higher probability while unimportant ones are dropped more frequently. On the other hand, the sparsity regularisation term $L_s$ pushes down the average value of the Bernoulli gating parameters so that most of them have a value less than 0.5 after the sigmoid activation. The hyperparameter $\lambda_s$ determines the weightage of $L_s$. If $\lambda_s$ is too low, the target sparsity level might not be attained, whereas high values might slightly affect performance (see Sec. 5.1). Training and Inference. The training process of the captioning model is divided into two distinct stages: decoder training and end-to-end fine-tuning. During the decoder training stage, we freeze the CNN parameters and only learn the decoder and gating parameters by optimising the loss given in Eq. 14. For the fine-tuning stage, we restore all the parameters $\theta$ and $\phi$ from the last checkpoint at the end of decoder training and optimise the entire model including the CNN. During this stage, $\mathrm{Bern}(\cdot)$ is still used but all $\phi$ parameters are frozen. After training is completed, all the weight matrices $W_{1:L}$ are transformed into sparse matrices by sampling from $G_{1:L}$ using $\mathrm{Round}(\cdot)$, after which $G$ can be discarded. In other words, the final weights $W^f$ are calculated as:
$W_l^f = W_l \odot \mathrm{Round}(\sigma(G_l))$ (15)
Experiment Setup: Unless stated otherwise, all experiments have the following configurations. We did not perform an extensive hyperparameter search due to limited resources. Hyperparameters: Models are implemented using TensorFlow r1.9. The image encoder used in this work is GoogLeNet (Inception-V1) with batch normalisation [50,51] pre-trained on ImageNet [52]. The input images are resized to 256 × 256, then randomly flipped and cropped to 224 × 224 before being fed to the CNN. The attention function $\mathrm{SoftAtt}(\cdot)$ operates on the Mixed-4f map $f \in \mathbb{R}^{196 \times 832}$. The size of the context vector $c_t$ and the attention MLP is set to $a = 512$. A single-layer LSTM or GRU network with a hidden state size of $r = 512$ is used. The word embedding size is set to $q = 256$ dimensions. The optimiser used for decoder training is Adam [53], with a batch size of 32. The initial learning rate (LR) is set to $1 \times 10^{-2}$ and annealed using the cosine curve $\alpha$ defined in Eq. 12, ending at $1 \times 10^{-5}$. All models are trained for 30 epochs. The weight decay rate is set to $\lambda_d = 1 \times 10^{-5}$. For fine-tuning, a smaller initial LR of $1 \times 10^{-3}$ is used and the entire model is trained for 10 epochs. Captioning model parameters are initialised randomly using Xavier uniform initialisation [49]. The input and output dropout rates for the dense RNN are both set to 0.35, while the attention map dropout rate is set to 0.1. Following [4,2], a lower dropout rate is used for sparse networks, where the RNN and attention dropout rates are set to 0.11 and 0.03 respectively. This is done to account for the reduced capacity of the sparse models. For fair comparison, we apply pruning to all weights of the captioning model for all of the pruning schemes. For our proposed method, we train the gating variables $\phi$ with a higher constant LR of 100 without annealing, which is consistent with [47].
We found that LR lower than 100 causes φ to train too slowly. We set λ s according to this heuristic: λ s = max(5, 0.5/(1 − s target )). All gating parameters φ are initialised to a constant m = 5.0, see Sec. 5.1 for other values. For gradual pruning [1], pruning is started after first epoch is completed and ended at the end of epoch 15, following the general heuristics outlined in [2]. Pruning frequency is set to 1000. We use the standard scheme where each layer is pruned to the same pruning ratio at every step. For hard pruning [25], pruning is applied to the dense baseline model after training is completed. Retraining is then performed for 10 epochs. LR and annealing schedule are the same as used for dense baseline. For inference, beam search is used in order to better approximate S = arg max S p(S | I). Beam size is set to b = 3 with no length normalisation. We evaluate the last checkpoint upon completion of training for all the experiments. We denote compression ratio as CR. Dataset The experiments are performed on the popular MS-COCO dataset [54]. It is a public English captioning dataset which contains 123, 287 images and each image is given at least 5 captions by different Amazon Mechanical Turk (AMT) workers. As there is no official test split with annotations available, the publicly available split 4 in the work of [55] is used in this work. The split assigns 5, 000 images for validation, another 5, 000 for testing and the rest for training. We reuse the publicly available tokenised captions. Words that occur less than 5 times are filtered out and sentences longer than 20 words are truncated. All the scores are obtained using the publicly available MS-COCO evaluation toolkit 5 , which computes BLEU [56], METEOR [57], ROUGE-L [58], CIDEr [59] and SPICE [60]. For sake of brevity, we label BLEU-1 to BLEU-4 as B-1 to B-4, and METEOR, ROUGE-L, CIDEr, SPICE as M, R, C, S respectively. Table 1 shows the effect of various gating initialisation values. From the table, we can see that the best overall performance is achieved when m is set to 5. Starting the gating parameters at a value of 5 allows all the captioning parameters θ to be retained with high probability at the early stages of training, allowing better convergence. This observation is also consistent with the works of [1] and [32], where the authors found that gradual pruning and late resetting can lead to better model performance. Thus, we recommend setting m = 5.0. Table 2 shows the effect of sparsity regularisation weightage λ s . This is the important hyperparameter that could affect the final sparsity level at convergence. From the results, we can see that low values lead to insufficient sparsity, and higher sparsity target s target requires higher λ s . For image captioning on MS-COCO, we empirically determined that the heuristic given in Sec. 4.1 works sufficiently well for sparsity levels from 80% to 97.5% (see Table 3 and 4. Experiments and Discussion Ablation study Comparison with RNN pruning methods In this section, we provide extensive comparisons of our proposed method with the dense baselines as well as competing methods at multiple sparsity levels. All the models have been verified to have achieved the targeted sparsity levels. From Table 3 and 4, we can clearly see that our proposed end-to-end pruning provides good performance when compared to the dense baselines. This is true even at high pruning ratios of 90% and 95%. 
The relative drops in BLEU-4 and CIDEr scores are only −1.0% to −2.9% and −1.3% to −2.9% while having 10 − 20× fewer NNZ parameters. This is in contrast with competing methods whose performance drops are double or even triple compared to ours, especially for LSTM. The performance advantage provided by end-to-end pruning is even more apparent at the high pruning ratio of 97.5%, offering a big 40× reduction in NNZ parameters. Even though we suffered relative degradations of −4.8% to −6.4% in BLEU-4 and CIDEr scores compared to baselines, our performance is still significantly better than the next-closest method which is gradual pruning. On the other hand, the performance achieved by our 80% pruned models are extremely close to that of baselines. Our sparse LSTM model even very slightly outperforms the baseline on some metrics, although we note that the standard deviation for CIDEr score across training runs is around 0.3 to 0.9. Among the competing methods, we can see that gradual pruning usually outperforms hard pruning, especially at high sparsities of 95% and 97.5%. That being said, we can see that class- Table 4: Comparison with dense GRU baseline and competing methods. Bold text indicates best overall performance. "Gradual" and "Hard" denote methods proposed in [1] and [25]. blind hard pruning is able to produce good results at moderate pruning rates of 80% and 90%, even outperforming gradual pruning. This is especially true for the GRU captioning model where it outperforms all other methods briefly at 90% sparsity, however we note that its performance on LSTM is generally lower. In contrast, our proposed approach achieves good performance on both LSTM and GRU models. All in all, these results showcase the strength of our proposed method. Across pruning ratios from 80% to 97.5%, our approach consistently maintain relatively good performance when compared to the dense baselines while outperforming magnitude-based gradual and hard pruning methods in most cases. Effect of fine-tuning In this section, we investigate the potential impact of fine-tuning the entire captioning model in an end-to-end manner. From Table 5, we can see that model fine-tuning has a performancerecovering effect on the sparse models. This phenomenon is especially apparent on very sparse models with sparsity at 97.5%. On both LSTM and GRU models, the drops in performance suffered due to pruning have mostly reduced except for LSTM at 80% sparsity. Notably, all the pruned models have remarkably similar performance from 80% sparsity up until 97.5%. The score gap between dense and sparse GRU models are exceedingly small, ranging from +1.2% to −1.9% for both BLEU-4 and CIDEr. For LSTM models, even though the score gap is slightly larger at −0.9% to −2.5% on both BLEU-4 and CIDEr, it is still considerably smaller than without CNN fine-tuning (Table 3). These results suggest that the Inception-V1 CNN pre-trained on ImageNet is not optimised to provide useful features for sparse decoders. As such, end-to-end fine-tuning together with sparse decoder allows features extracted by the CNN to be adapted where useful semantic information can be propagated through surviving connections in the decoder. We also provided compression and performance comparison with the closely related work of [30] who utilised GP [31] method to produce sparse H-LSTM for image captioning. For fairness, we also provide scores obtained at CR of 40× using beam size of 2 instead of 3. 
From the table, we can see that at overall CR of 20× to 40×, our sparse models are able to outperform H-LSTM with lower NNZ parameters. This indicates that the effectiveness of our one-shot approach is at least comparable to the iterative process of grow-and-prune. Large-sparse versus small-dense In this section, we show that a large sparse LSTM image captioning model produced via endto-end pruning is able to outperform a smaller dense LSTM trained normally. The small-dense model denoted as LSTM-S has a word embedding size of q = 64 dimensions, LSTM size of r = 128 units and finally attention MLP size of a = 96 units. The results are given in Table 6. From the results, we can see that the small-dense model with 5× fewer parameters performs considerably worse than all the large-sparse models LSTM-M across the board. Notably, we can see that the large-sparse LSTM-M model with 40× fewer NNZ parameters still managed to outperform LSTM-S with a considerable margin. At equal NNZ parameters, the large-sparse model comfortably outperforms the small-dense model. This showcases further the strength of model pruning and solidifies the observations made in works on RNN pruning [2, 1]. Caption uniqueness and length In this section, we explore the potential effects of our proposed end-to-end pruning on the uniqueness and length of the generated captions. As pruning reduces the complexity and capacity of the decoder considerably, we wish to see if the sparse models show any signs of training data memorisation and hence potentially overfitting. In such cases, uniqueness of the generated captions would decrease as the decoder learns to simply repeat captions available in the training set. A generated caption is considered to be unique if it is not found in the training set. From Table 7, we can see that despite the heavy reductions in NNZ parameters, the uniqueness of generated captions have not decreased. On the contrary, more unseen captions are being generated at higher levels of sparsity and compression. On the other hand, we can see that the average lengths of generated captions peaked at 80% sparsity in most cases and then decrease slightly as sparsity increase. That being said, the reductions in caption length are minimal (+0.5% to −2.3%) considering the substantial decoder compression rates of up to 40×. Together with the good performance shown in Table 3 and 4, these results indicate that our approach is able to maintain both the variability of generated captions and their quality as measured by the metric scores. Layer-wise sparsity comparison Finally, we visualise the pruning ratio of each decoder layers when pruned using the different methods listed in Sec. 5.2. Among the approaches, both gradual and class-uniform pruning produces the same sparsity level across all the layers. To better showcase the differences in layer-wise pruning ratios, we decided to visualise two opposite ends in which the first has a relatively moderate sparsity of 80% while the other has a high sparsity of 97.5%. In both Fig. 3a and 3b, we denote the decoder layers as follows: "RNN initial state" refers to W I in Eq. 2; "LSTM kernel" is the concatenation of all gate kernels in LSTM (i.e. input, output, forget, cell); "Key", "Value" and "Query" layers refer to projection layers in the attention module (see [61] for details); "Attention MLP" is the second layer of the 2-layer attention MLP; and finally "Word" and "Logits" refer to the word embedding matrix E w in Eq. 5 and E o in Eq. 3 respectively. 
From the figures, we can see that our proposed pruning method consistently prune "attention MLP" layer the least. This is followed by "LSTM kernel" and "Value" layers where they generally receive lesser pruning compared to others. On the flip side, "Key" and "Query" layers were pruned most heavily at levels often exceeding the targeted pruning rates. Finally, "Word embedding" consistently receives more pruning than "Logits layer". This may indicate that there exists substantial information redundancy in the word embeddings matrix as noted in works such as [37,40,62]. Conclusion and Future Work In this work, we have investigated the effectiveness of model weight pruning on the task of image captioning with visual attention. In particular, we proposed an end-to-end pruning method that performs considerably better than competing methods at maintaining captioning performance while maximising compression rate. Our single-shot approach is simple and fast to use, provides good performance, and its sparsity level is easy to tune. Moreover, we have demonstrated by pruning decoder weights during training, we can find sparse models that performs better than dense counterparts while significantly reducing model size. Our results pave the way towards deployment on mobile and embedded devices due to their small size and reduced memory requirements. In the future, we wish to investigate the generalisation capability of end-to-end pruning when applied on Transformer models [61]. We would also
4,912
1908.10797
2971306187
The Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and has achieved state-of-the-art (SOTA) performance. However, despite its impressive results, the large number of parameters in RNN models makes deployment on mobile and embedded devices infeasible. Driven by this problem, many works have proposed pruning methods to reduce the size of RNN models. In this work, we propose an end-to-end pruning method for image captioning models equipped with visual attention. Our proposed method is able to achieve sparsity levels up to 97.5% without significant performance loss relative to the baseline (around 1% loss at 40x compression of the GRU model). Our method is also simple to use and tune, facilitating faster development times for neural network practitioners. We perform extensive experiments on the popular MS-COCO dataset in order to empirically validate the efficacy of our proposed method.
Early works like @cite_48 and @cite_2 explored pruning by computing the Hessian of the loss with respect to the parameters in order to assess the saliency of each parameter. Other works involving saliency computation include @cite_0 and @cite_33 , where the sensitivity of the loss with respect to neurons and weights, respectively, is used. On the other hand, works such as @cite_13 @cite_55 directly induce network sparsity by incorporating sparsity-enforcing penalty terms into the loss function.
{ "abstract": [ "The sensitivity of the global error (cost) function to the inclusion exclusion of each synapse in the artificial neural network is estimated. Introduced are shadow arrays which keep track of the incremental changes to the synaptic weights during a single pass of back-propagating learning. The synapses are then ordered by decreasing sensitivity numbers so that the network can be efficiently pruned by discarding the last items of the sorted list. Unlike previous approaches, this simple procedure does not require a modification of the cost function, does not interfere with the learning process, and demands a negligible computational overhead. >", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "Abstract It is widely known that, despite its popularity, back propagation learning suffers from various difficulties. There have been many studies aiming at the solution of these. Among them there are a class of learning algorithms, which I call structural learning, aiming at small-sized networks requiring less computational cost. Still more important is the discovery of regularities in or the extraction of rules from training data. For this purpose I propose a learning method called structural learning with forgetting. It is applied to various examples: the discovery of Boolean functions, classification of irises, discovery of recurrent networks, prediction of time series and rule extraction from mushroom data. These results demonstrate the effectiveness of structural learning with forgetting. A comparative study on various structural learning methods also supports its effectiveness.", "This paper proposes a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and improving its performance. The basic idea is to iteratively train the network to a certain performance criterion, compute a measure of relevance that identifies which input or hidden units are most critical to performance, and automatically trim the least relevant units. This skeletonization technique can be used to simplify networks by eliminating units that convey redundant information; to improve learning performance by first learning with spare hidden units and then trimming the unnecessary ones away, thereby constraining generalization; and to understand the behavior of networks in terms of minimal \"rules.\"", "The use of information from all second-order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and, in some cases, enable rule extraction, is investigated. The method, Optimal Brain Surgeon (OBS), is significantly better than magnitude-based methods and Optimal Brain Damage, which often remove the wrong weights. 
OBS, permits pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H sup -1 from training data and structural information of the set. OBS deletes the correct weights from a trained XOR network in every case. >", "This paper presents a variation of the back-propagation algorithm that makes optimal use of a network hidden units by decrasing an \"energy\" term written as a function of the squared activations of these hidden units. The algorithm can automatically find optimal or nearly optimal architectures necessary to solve known Boolean functions, facilitate the interpretation of the activation of the remaining hidden units and automatically estimate the complexity of architectures appropriate for phonetic labeling problems. The general principle of the algorithm can also be adapted to different tasks: for example, it can be used to eliminate the [0, 0] local minimum of the [-1. +1] logistic activation function while preserving a much faster convergence and forcing binary activations over the set of hidden units." ], "cite_N": [ "@cite_33", "@cite_48", "@cite_55", "@cite_0", "@cite_2", "@cite_13" ], "mid": [ "2097533491", "2114766824", "2028069051", "2134273960", "2156150815", "2096764219" ] }
Image Captioning with Sparse Recurrent Neural Network
Automatically generating a caption that describes an image, a problem known as image captioning, is a challenging task where computer vision (CV) meets natural language processing (NLP). A well-performing model not only has to identify the objects in the image, but also capture the semantic relationships between them, the general context, and the activities they are involved in. Lastly, the model has to map the visual representation into a fully-formed sentence in a natural language such as English. A good image captioning model can have many useful applications, which include helping the visually impaired to better understand web content, providing descriptive annotations of website content, and enabling better context-based image retrieval by tagging images with accurate natural language descriptions.

Driven by user privacy concerns and the quest for lower user-perceived latency, deployment on edge devices away from remote servers is increasingly required. As edge devices usually have limited battery capacity and thermal limits, this presents a few key challenges in the form of storage size, power consumption and computational demands [1]. For models incorporating RNNs, on-device inference is often memory bandwidth-bound. As RNN parameters are fixed at every time step, parameter reading forms the bulk of the work [2,1]. As such, RNN pruning offers the opportunity not only to reduce the amount of memory access but also to fit the model in on-chip SRAM cache rather than off-chip DRAM memory, both of which dramatically reduce power consumption [3,4]. Similarly, sparsity patterns for pruned RNNs are fixed across time steps. This offers the potential to factorise scheduling and load-balancing operations outside of the loop and enable reuse [2]. Lastly, pruning allows larger RNNs to be stored in memory and trained [2,5].

In this work, we propose a one-shot end-to-end pruning method to produce very sparse image captioning decoders (up to 97.5% sparsity) while maintaining good performance relative to the dense baseline model as well as competing methods. We detail our contributions in the following section (Sec. 2.2).

Model pruning

Modern neural networks that provide good performance tend to be large and overparameterised, fuelled by observations that larger networks [6,7,8] tend to be easier to train. This in turn drives numerous efforts to reduce model size using techniques such as weight pruning and quantisation [9,10,11]. Early works like [12] and [13] explored pruning by computing the Hessian of the loss with respect to the parameters in order to assess the saliency of each parameter. Other works involving saliency computation include [14] and [15], where the sensitivity of the loss with respect to neurons and weights, respectively, is used. On the other hand, works such as [16,17] directly induce network sparsity by incorporating sparsity-enforcing penalty terms into the loss function. Most of the recent works in network pruning focus on vision-centric classification tasks using Convolutional Neural Networks (CNNs) and occasionally RNNs. Techniques proposed include magnitude-based pruning [3,4,18] and variational pruning [19,20,21]. Among these, magnitude-based weight pruning has become popular due to its effectiveness and simplicity. Most notably, [3] employed a combination of pruning, quantisation and Huffman encoding, resulting in massive reductions in model size without affecting accuracy.
While unstructured sparse connectivity provides a reduction in storage size, it requires sparse General Matrix-Matrix Multiply (GEMM) libraries such as cuSPARSE and SPBLAS in order to achieve accelerated inference. Motivated by existing hardware architectures optimised for dense linear algebra, many works propose techniques to prune and induce sparsity in a structured way in which entire filters are removed [22,23,24]. On the other hand, works extending connection pruning to RNNs are considerably fewer [25,2,1,26]. See et al. [25] first explored magnitude-based pruning applied to a deep multi-layer neural machine translation (NMT) model with Long Short-Term Memory (LSTM) [27]. In their work, three pruning schemes are evaluated: class-blind, class-uniform and class-distribution. Class-blind pruning was found to produce the best result compared to the other two schemes. Narang et al. [2] introduced a gradual magnitude-based pruning scheme for speech recognition RNNs whereby all the weights in a layer below some chosen threshold are pruned. Gradual pruning is performed in parallel with network training, while the pruning rate is controlled by a slope function with two distinct phases. This is extended by Zhu and Gupta [1], who simplified the gradual pruning scheme with fewer hyperparameters.

Our contribution

Our proposed end-to-end pruning method possesses three main qualities:

i) Simple and fast. Our approach enables easy pruning of the RNN decoder equipped with visual attention, whereby the best number of weights to prune in each layer is automatically determined. Compared to works such as [1,2], our approach is simpler, with 1 to 2 hyperparameters versus 3 to 4 hyperparameters. Our method also does not rely on reinforcement learning techniques such as in the work of [28]. Moreover, our method applies pruning to all the weights in the RNN decoder and does not require special considerations to exclude certain weight classes from pruning. Lastly, our method completes pruning in a single-shot process rather than requiring an iterative train-and-prune process as in [29,30,31,32].

ii) Good performance-to-sparsity ratio enabling very high sparsity. Our approach achieves good performance across sparsity levels from 80% up to 97.5% (a 40× reduction in the Number of Non-zero (NNZ) parameters). This is in contrast with competing methods [1,25], where there is a significant performance drop-off starting at a sparsity level of 90%.

iii) Easily tunable sparsity level. Our approach provides a way for neural network practitioners to easily control the level of sparsity and compression desired. This allows for model solutions that are tailored to each particular scenario. In contrast, while the closely related works of [33,34] also provide good performance with the incorporation of gating variables, there is no straightforward way of controlling the final sparsity level. In their works, regularisers such as bi-modal, l2, l1 and l0 regularisers are used to encourage network sparsity. Their works also focus only on image classification using CNNs.

While there are other works on compressing RNNs, most of the methods proposed either come with structural constraints or are complementary to model pruning in principle. Examples include using low-rank matrix factorisations [35,36], product quantisation on embeddings [37], factorising word predictions into multiple time steps [38,39,40], and grouping RNNs [41].
Lastly, another closely related work by [30] also incorporated model pruning into image captioning. However, we note three notable differences: 1) their work is focused on proposing a new LSTM cell structure named the H-LSTM; 2) their work utilises the grow-and-prune (GP) method [31], which necessitates computationally expensive and time-consuming iterative pruning; and 3) the compression figures stated are calculated based on the size of the LSTM cells instead of the entire decoder.

Proposed Method

Our proposed method involves incorporating learnable gating parameters into a regular image captioning framework. We denote weight, bias and gating matrices as W, B and G respectively. For a model with L layers, the captioning and gating parameters are denoted as θ and φ such that θ = {W_1:L, B_1:L} and φ = {G_1:L}. As there is substantial existing work focusing on pruning CNNs, we focus our efforts on pruning generative RNNs. As such, we only prune the RNN decoder. All model size calculations in this work include only the decoder (including the attention module), while the encoder (i.e. the CNN) is excluded.

Image captioning with visual attention

Our image captioning framework of interest is a simplified variant of the Show, Attend and Tell [42] model, which uses a single-layer RNN equipped with visual attention on the CNN feature map. It is a popular framework that forms the basis for subsequent state-of-the-art (SOTA) works on image captioning [43,44]. In this work, we employ LSTM and Gated Recurrent Unit (GRU) [45] as the RNN cell.

[Figure caption: Detailed explanation for (a) is given in Sec. 5.6. In (b), "Weighted annealed loss" refers to λ_s L_s in Eq. 14, while "Loss" refers to L_s before applying the cosine annealing in Eq. 11.]

Suppose {S_0, ..., S_{T−1}} is the sequence of words in a sentence of length T; the model directly maximises the probability of the correct description given an image I using the following formulation:

log p(S | I) = Σ_{t=0}^{T} log p(S_t | I, S_{0:t−1}, c_t)    (1)

where t is the time step and p(S_t | I, S_{0:t−1}, c_t) is the probability of generating a word given an image I, the previous words S_{0:t−1}, and a context vector c_t. For an RNN with r units, the hidden state of the RNN is initialised with the image embedding vector as follows:

h_{t=−1} = W_I I_embed ,   m_{t=−1} = 0    (2)

where W_I ∈ R^{r×h} is a weight matrix and h is the size of I_embed. The attention function used in this work is the soft-attention introduced by [46] and used in [42], where a multilayer perceptron (MLP) with a single hidden layer is employed to calculate the attention weights on a particular feature map. The context vector c_t is then concatenated with the previous predicted word embedding to serve as input to the RNN. Finally, a probability distribution over the vocabulary is produced from the hidden state h_t:

p_t = Softmax(E_o h_t)    (3)
h_t , m_t = RNN(x_t, h_{t−1}, m_{t−1})    (4)
x_t = [E_w S_{t−1}, c_t]    (5)
c_t = SoftAtt(f)    (6)

where E_w ∈ R^{q×v} and E_o ∈ R^{v×r} are the input and output embedding matrices respectively; p_t is the probability distribution over the vocabulary V; m_t is the memory state; x_t is the current input; S_{t−1} ∈ R^v is the one-hot vector of the previous word; c_t ∈ R^a is the context vector; f is the CNN feature map; and [·, ·] is the concatenation operator. For GRU, all m_t terms are ignored. Finally, the standard cross-entropy loss function for the captioning model θ is given by:

L_c = − Σ_{t}^{T} log p_t(S_t) + λ_d ||θ||_2^2    (7)
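To make the decoder structure of Eqs. 3-6 concrete, the following is a minimal sketch of a single decoding step in TensorFlow/Keras-style Python. It is not the authors' implementation: the class and layer names (AttentionDecoderStep, attn_feat, attn_hid, attn_score) are our own illustrative choices, the shapes are assumed from the hyperparameters quoted later (196 spatial locations, 832-dimensional features), and for simplicity the context vector here is a weighted sum of the raw CNN features rather than an a = 512-dimensional projection.

import tensorflow as tf

class AttentionDecoderStep(tf.keras.layers.Layer):
    def __init__(self, vocab_size, word_dim=256, rnn_units=512, attn_dim=512):
        super().__init__()
        self.embed = tf.keras.layers.Embedding(vocab_size, word_dim)   # E_w
        self.cell = tf.keras.layers.LSTMCell(rnn_units)                 # RNN(.)
        self.attn_feat = tf.keras.layers.Dense(attn_dim)                # projects CNN features
        self.attn_hid = tf.keras.layers.Dense(attn_dim)                 # projects h_{t-1}
        self.attn_score = tf.keras.layers.Dense(1)                      # 2nd layer of attention MLP
        self.logits = tf.keras.layers.Dense(vocab_size)                 # E_o

    def call(self, prev_word, feat_map, state):
        # Soft attention (Eq. 6): score each of the 196 spatial locations.
        h_prev = state[0]
        scores = self.attn_score(tf.nn.tanh(
            self.attn_feat(feat_map) + self.attn_hid(h_prev)[:, None, :]))
        weights = tf.nn.softmax(scores, axis=1)                         # (B, 196, 1)
        context = tf.reduce_sum(weights * feat_map, axis=1)             # c_t

        # Concatenate previous word embedding with the context vector (Eq. 5).
        x_t = tf.concat([self.embed(prev_word), context], axis=-1)

        # One RNN step (Eq. 4) and the word distribution (Eq. 3).
        h_t, new_state = self.cell(x_t, state)
        return tf.nn.softmax(self.logits(h_t)), new_state

In use, the LSTM state would be initialised from the image embedding as in Eq. 2 and the step applied repeatedly, feeding each predicted word back in, until an end-of-sentence token is produced.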
End-to-end pruning

Formulation. Similar to [1], the TensorFlow framework is extended to prune network connections during training. Inspired by the concept of learnable Supermasks introduced by [33,47], our proposed method achieves model pruning via learnable gating variables that are trained in an end-to-end fashion. An overview of our method is illustrated in Fig. 1. For every weight variable matrix W to be pruned, we create a gating variable matrix G with the same shape as W. This gating matrix G functions as a masking mechanism that determines which of the parameters w in the weight matrix W participate in both the forward execution and the back-propagation of the graph. To achieve this masking effect, we calculate the effective weight tensor Ŵ_l as follows:

Ŵ_l = W_l ⊙ G^b_l    (8)
G^b_l = z(σ(G_l))    (9)

where W_l, G_l ∈ R^D are the original weight and gating matrices from layer l with shape D; the superscript (·)^b indicates binary sampled variables; ⊙ is element-wise multiplication; σ(·) is a point-wise function that transforms continuous values into the interval (0, 1); and z(·) is a point-wise function that samples from a Bernoulli distribution. The composite function z(σ(·)) thus effectively transforms continuous values into binary values.

Binary gating matrices G^b can be obtained by treating σ(G) as Bernoulli random variables. While there are many possible choices for the σ function, we decided to use the logistic sigmoid function following [48] and [47]. To sample from the Bernoulli distribution, we can either perform an unbiased draw or a maximum-likelihood (ML) draw [33]. An unbiased draw is the usual sampling process where a gating value g ∈ (0, 1) is binarised to 1.0 with probability g and 0.0 otherwise, whereas an ML draw involves thresholding the value g at 0.5. In this work, we denote the unbiased and ML draws using the sampling functions z(·) = Bern(·) and z(·) = Round(·) respectively. We back-propagate through both sampling functions using the straight-through estimator [48] (i.e. ∂z(g)/∂g = 1).

Prior to training, all the gating variables are initialised to the same constant value m, while the weights and biases of the network are initialised using standard initialisation schemes (e.g. Xavier [49]). During training, both sampling functions Bern(·) and Round(·) are used in different ways. To obtain the effective weight tensor used to generate network activations, we utilise Bern(·) to inject some stochasticity that helps with training and to mitigate the bias arising from the constant value initialisation. Thus the effective weight calculation becomes:

Ŵ_l = W_l ⊙ Bern(σ(G_l))    (10)

To drive the sparsity level of the gating variables φ to the user-specified level s_target, we introduce a regularisation term L_s. Consistent with the observations in the works of [1] and [32], we found that annealing the loss over the course of training produces the best result. Annealing is done using a cosine curve α defined in Eq. 12. To ensure determinism when calculating sparsity, we use Round(·) to sample from σ(G):

L_s = (1 − α) × ( s_target − (1 − p_nnz / p_total) )    (11)
α = 1/2 ( 1 + cos(nπ / n_max) )    (12)
p_nnz = Σ_{l=0}^{L} Σ_{j=0}^{J} Round(σ(g_{j,l}))    (13)

where p_nnz is the number of NNZ gating parameters; p_total is the total number of gating parameters; n and n_max are the current and final training steps respectively; g_{j,l} is the gating parameter at position j in the matrix G_l from layer l; L is the number of layers; and J is the number of parameters in matrix G_l.
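To make Eqs. 8-13 concrete, here is a minimal TensorFlow-style sketch of the gating mechanism. It is an illustration rather than the authors' released implementation: the helper names (bern_ste, round_ste, masked_weight, sparsity_loss, finalize_weights) are ours, TF2 eager-style ops are assumed, and step/max_steps are plain Python numbers.

import math
import tensorflow as tf

def bern_ste(probs):
    # Unbiased draw z(.) = Bern(.): forward pass samples 0/1 with probability `probs`,
    # backward pass behaves as the identity (straight-through, dz/dg = 1).
    noise = tf.random.uniform(tf.shape(probs))
    hard = tf.cast(noise < probs, probs.dtype)
    return probs + tf.stop_gradient(hard - probs)

def round_ste(probs):
    # Maximum-likelihood draw z(.) = Round(.): threshold at 0.5, also straight-through.
    hard = tf.round(probs)
    return probs + tf.stop_gradient(hard - probs)

def masked_weight(W, G, training=True):
    # Effective weight of Eqs. 8-10: W * z(sigma(G)), stochastic Bern(.) during training.
    probs = tf.sigmoid(G)
    gate = bern_ste(probs) if training else round_ste(probs)
    return W * gate

def cosine_anneal(step, max_steps):
    # Alpha of Eq. 12: starts at 1.0 and decays to 0.0 over training.
    return 0.5 * (1.0 + math.cos(math.pi * step / max_steps))

def sparsity_loss(gates, s_target, step, max_steps):
    # L_s of Eq. 11, using the deterministic Round(.) draw of Eq. 13 so the measured
    # sparsity is not affected by sampling noise (gradients still reach G via the STE).
    p_total = float(sum(g.shape.num_elements() for g in gates))
    p_nnz = tf.add_n([tf.reduce_sum(round_ste(tf.sigmoid(g))) for g in gates])
    current_sparsity = 1.0 - p_nnz / p_total
    alpha = cosine_anneal(step, max_steps)
    return (1.0 - alpha) * (s_target - current_sparsity)

def finalize_weights(weights, gates):
    # After training, bake the learned masks into sparse weights (Eq. 15 in the text);
    # the gating matrices G can then be discarded.
    return [w * tf.round(tf.sigmoid(g)) for w, g in zip(weights, gates)]

In this sketch, every pruned decoder weight matrix would be wrapped with masked_weight, and sparsity_loss supplies the L_s term that enters the total objective in Eq. 14 below.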
The progression of the sparsity loss L_s as well as the sparsity levels of various layers in the decoder are illustrated in Fig. 2b and 2a respectively. The final objective function used to train the captioning model θ with gating variables φ is:

L(I, S, s_target) = L_c + λ_s L_s    (14)

Intuitively, the captioning loss term L_c provides supervision for learning the saliency of each parameter, whereby important parameters are retained with higher probability while unimportant ones are dropped more frequently. On the other hand, the sparsity regularisation term L_s pushes down the average value of the Bernoulli gating parameters so that most of them have a value less than 0.5 after sigmoid activation. The hyperparameter λ_s determines the weightage of L_s. If λ_s is too low, the target sparsity level might not be attained, whereas high values might slightly affect performance (see Sec. 5.1).

Training and Inference. The training process of the captioning model is divided into two distinct stages: decoder training and end-to-end fine-tuning. During the decoder training stage, we freeze the CNN parameters and only learn the decoder and gating parameters by optimising the loss given in Eq. 14. For the fine-tuning stage, we restore all the parameters θ and φ from the last checkpoint at the end of decoder training and optimise the entire model including the CNN. During this stage, Bern(·) is still used but all φ parameters are frozen. After training is completed, all the weight matrices W_1:L are transformed into sparse matrices by sampling from G_1:L using Round(·), after which G can be discarded. In other words, the final weights W^f are calculated as:

W^f_l = W_l ⊙ Round(σ(G_l))    (15)

Experiment Setup

Unless stated otherwise, all experiments have the following configurations. We did not perform extensive hyperparameter search due to limited resources.

Hyperparameters

Models are implemented using TensorFlow r1.9. The image encoder used in this work is GoogLeNet (Inception-V1) with batch normalisation [50,51], pre-trained on ImageNet [52]. The input images are resized to 256 × 256, then randomly flipped and cropped to 224 × 224 before being fed to the CNN. The attention function SoftAtt(·) operates on the Mixed-4f map f ∈ R^{196×832}. The size of the context vector c_t and the attention MLP is set to a = 512. A single-layer LSTM or GRU network with a hidden state size of r = 512 is used. The word embedding size is set to q = 256 dimensions. The optimiser used for decoder training is Adam [53], with a batch size of 32. The initial learning rate (LR) is set to 1 × 10^−2 and annealed using the cosine curve α defined in Eq. 12, ending at 1 × 10^−5. All models are trained for 30 epochs. The weight decay rate is set to λ_d = 1 × 10^−5. For fine-tuning, a smaller initial LR of 1 × 10^−3 is used and the entire model is trained for 10 epochs. Captioning model parameters are initialised randomly using Xavier uniform initialisation [49]. The input and output dropout rates for the dense RNN are both set to 0.35, while the attention map dropout rate is set to 0.1. Following [4,2], lower dropout rates are used for sparse networks, where the RNN and attention dropout rates are set to 0.11 and 0.03 respectively. This is done to account for the reduced capacity of the sparse models. For fair comparison, we apply pruning to all weights of the captioning model for all of the pruning schemes. For our proposed method, we train the gating variables φ with a higher constant LR of 100 without annealing, which is consistent with [47].
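In practice, the separate learning rates for θ and φ can be realised by keeping the two parameter groups in separate optimisers; a minimal sketch follows. This is our own construction: the paper states only a constant LR of 100 for the gates, so the choice of plain SGD for φ, the constant initialisation via make_gates, and the helper names are assumptions.

import tensorflow as tf

def make_gates(weights, m=5.0):
    # One gating matrix per pruned weight matrix, initialised to the constant m = 5.0
    # so that sigmoid(5.0) ~ 0.993 and nearly all weights are kept early in training.
    return [tf.Variable(tf.ones_like(w) * m, trainable=True) for w in weights]

# Adam for the captioning parameters theta; a separate constant-LR optimiser for phi.
theta_opt = tf.keras.optimizers.Adam(learning_rate=1e-2)
phi_opt = tf.keras.optimizers.SGD(learning_rate=100.0)

def train_step(loss_fn, theta, phi):
    # loss_fn() runs the forward pass and returns L = L_c + lambda_s * L_s (Eq. 14).
    with tf.GradientTape() as tape:
        loss = loss_fn()
    grads = tape.gradient(loss, theta + phi)
    theta_opt.apply_gradients(zip(grads[:len(theta)], theta))
    phi_opt.apply_gradients(zip(grads[len(theta):], phi))
    return loss

During the fine-tuning stage described above, the same step would be run with the φ group excluded (frozen) and the CNN variables added to θ.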
We found that an LR lower than 100 causes φ to train too slowly. We set λ_s according to this heuristic: λ_s = max(5, 0.5 / (1 − s_target)), which evaluates to 5, 5, 10 and 20 for sparsity targets of 80%, 90%, 95% and 97.5% respectively. All gating parameters φ are initialised to a constant m = 5.0; see Sec. 5.1 for other values. For gradual pruning [1], pruning is started after the first epoch is completed and ends at the end of epoch 15, following the general heuristics outlined in [2]. The pruning frequency is set to 1000. We use the standard scheme where each layer is pruned to the same pruning ratio at every step. For hard pruning [25], pruning is applied to the dense baseline model after training is completed. Retraining is then performed for 10 epochs. The LR and annealing schedule are the same as those used for the dense baseline. For inference, beam search is used in order to better approximate arg max_S p(S | I). The beam size is set to b = 3 with no length normalisation. We evaluate the last checkpoint upon completion of training for all the experiments. We denote the compression ratio as CR.

Dataset

The experiments are performed on the popular MS-COCO dataset [54]. It is a public English captioning dataset which contains 123,287 images, and each image is given at least 5 captions by different Amazon Mechanical Turk (AMT) workers. As there is no official test split with annotations available, the publicly available split 4 in the work of [55] is used in this work. The split assigns 5,000 images for validation, another 5,000 for testing and the rest for training. We reuse the publicly available tokenised captions. Words that occur fewer than 5 times are filtered out and sentences longer than 20 words are truncated. All the scores are obtained using the publicly available MS-COCO evaluation toolkit 5, which computes BLEU [56], METEOR [57], ROUGE-L [58], CIDEr [59] and SPICE [60]. For the sake of brevity, we label BLEU-1 to BLEU-4 as B-1 to B-4, and METEOR, ROUGE-L, CIDEr, SPICE as M, R, C, S respectively.

Experiments and Discussion

Ablation study

Table 1 shows the effect of various gating initialisation values. From the table, we can see that the best overall performance is achieved when m is set to 5. Starting the gating parameters at a value of 5 allows all the captioning parameters θ to be retained with high probability at the early stages of training, allowing better convergence. This observation is also consistent with the works of [1] and [32], where the authors found that gradual pruning and late resetting can lead to better model performance. Thus, we recommend setting m = 5.0. Table 2 shows the effect of the sparsity regularisation weightage λ_s. This is an important hyperparameter that can affect the final sparsity level at convergence. From the results, we can see that low values lead to insufficient sparsity, and a higher sparsity target s_target requires a higher λ_s. For image captioning on MS-COCO, we empirically determined that the heuristic given in Sec. 4.1 works sufficiently well for sparsity levels from 80% to 97.5% (see Tables 3 and 4).

Comparison with RNN pruning methods

In this section, we provide extensive comparisons of our proposed method with the dense baselines as well as competing methods at multiple sparsity levels. All the models have been verified to have achieved the targeted sparsity levels. From Tables 3 and 4, we can clearly see that our proposed end-to-end pruning provides good performance when compared to the dense baselines. This is true even at high pruning ratios of 90% and 95%.
The relative drops in BLEU-4 and CIDEr scores are only −1.0% to −2.9% and −1.3% to −2.9% respectively, while having 10-20× fewer NNZ parameters (for a decoder pruned to sparsity s, the compression ratio is 1/(1 − s), so 90% and 95% sparsity correspond to 10× and 20×, and 97.5% to 40×). This is in contrast with competing methods, whose performance drops are double or even triple ours, especially for LSTM. The performance advantage provided by end-to-end pruning is even more apparent at the high pruning ratio of 97.5%, which offers a large 40× reduction in NNZ parameters. Even though we suffered relative degradations of −4.8% to −6.4% in BLEU-4 and CIDEr scores compared to the baselines, our performance is still significantly better than the next-closest method, which is gradual pruning. On the other hand, the performance achieved by our 80% pruned models is extremely close to that of the baselines. Our sparse LSTM model even slightly outperforms the baseline on some metrics, although we note that the standard deviation of the CIDEr score across training runs is around 0.3 to 0.9.

Among the competing methods, we can see that gradual pruning usually outperforms hard pruning, especially at high sparsities of 95% and 97.5%. That being said, we can see that class-blind hard pruning is able to produce good results at moderate pruning rates of 80% and 90%, even outperforming gradual pruning. This is especially true for the GRU captioning model, where it briefly outperforms all other methods at 90% sparsity; however, we note that its performance on LSTM is generally lower. In contrast, our proposed approach achieves good performance on both LSTM and GRU models. All in all, these results showcase the strength of our proposed method. Across pruning ratios from 80% to 97.5%, our approach consistently maintains relatively good performance when compared to the dense baselines while outperforming magnitude-based gradual and hard pruning methods in most cases.

[Table 4 caption: Comparison with dense GRU baseline and competing methods. Bold text indicates best overall performance. "Gradual" and "Hard" denote the methods proposed in [1] and [25].]

Effect of fine-tuning

In this section, we investigate the potential impact of fine-tuning the entire captioning model in an end-to-end manner. From Table 5, we can see that model fine-tuning has a performance-recovering effect on the sparse models. This phenomenon is especially apparent on very sparse models with sparsity at 97.5%. On both LSTM and GRU models, the drops in performance suffered due to pruning are mostly reduced, except for LSTM at 80% sparsity. Notably, all the pruned models have remarkably similar performance from 80% sparsity up until 97.5%. The score gap between the dense and sparse GRU models is exceedingly small, ranging from +1.2% to −1.9% for both BLEU-4 and CIDEr. For LSTM models, even though the score gap is slightly larger at −0.9% to −2.5% on both BLEU-4 and CIDEr, it is still considerably smaller than without CNN fine-tuning (Table 3). These results suggest that the Inception-V1 CNN pre-trained on ImageNet is not optimised to provide useful features for sparse decoders. As such, end-to-end fine-tuning together with the sparse decoder allows the features extracted by the CNN to be adapted, so that useful semantic information can be propagated through the surviving connections in the decoder.

We also provide a compression and performance comparison with the closely related work of [30], which utilised the GP [31] method to produce a sparse H-LSTM for image captioning. For fairness, we also provide scores obtained at a CR of 40× using a beam size of 2 instead of 3.
From the table, we can see that at an overall CR of 20× to 40×, our sparse models are able to outperform the H-LSTM with fewer NNZ parameters. This indicates that the effectiveness of our one-shot approach is at least comparable to the iterative process of grow-and-prune.

Large-sparse versus small-dense

In this section, we show that a large sparse LSTM image captioning model produced via end-to-end pruning is able to outperform a smaller dense LSTM trained normally. The small-dense model, denoted as LSTM-S, has a word embedding size of q = 64 dimensions, an LSTM size of r = 128 units and an attention MLP size of a = 96 units. The results are given in Table 6. From the results, we can see that the small-dense model with 5× fewer parameters performs considerably worse than all the large-sparse LSTM-M models across the board. Notably, we can see that the large-sparse LSTM-M model with 40× fewer NNZ parameters still manages to outperform LSTM-S by a considerable margin. At equal NNZ parameters, the large-sparse model comfortably outperforms the small-dense model. This further showcases the strength of model pruning and solidifies the observations made in works on RNN pruning [2,1].

Caption uniqueness and length

In this section, we explore the potential effects of our proposed end-to-end pruning on the uniqueness and length of the generated captions. As pruning reduces the complexity and capacity of the decoder considerably, we wish to see if the sparse models show any signs of training data memorisation and hence potentially overfitting. In such cases, the uniqueness of the generated captions would decrease as the decoder learns to simply repeat captions available in the training set. A generated caption is considered to be unique if it is not found in the training set. From Table 7, we can see that despite the heavy reductions in NNZ parameters, the uniqueness of the generated captions has not decreased. On the contrary, more unseen captions are generated at higher levels of sparsity and compression. On the other hand, we can see that the average lengths of the generated captions peak at 80% sparsity in most cases and then decrease slightly as sparsity increases. That being said, the reductions in caption length are minimal (+0.5% to −2.3%) considering the substantial decoder compression rates of up to 40×. Together with the good performance shown in Tables 3 and 4, these results indicate that our approach is able to maintain both the variability of the generated captions and their quality as measured by the metric scores.

Layer-wise sparsity comparison

Finally, we visualise the pruning ratio of each decoder layer when pruned using the different methods listed in Sec. 5.2 (a small sketch of how these per-layer ratios can be read off the learned gates is given below). Among the approaches, both gradual and class-uniform pruning produce the same sparsity level across all the layers. To better showcase the differences in layer-wise pruning ratios, we decided to visualise two opposite ends: the first has a relatively moderate sparsity of 80% while the other has a high sparsity of 97.5%. In both Fig. 3a and 3b, we denote the decoder layers as follows: "RNN initial state" refers to W_I in Eq. 2; "LSTM kernel" is the concatenation of all gate kernels in the LSTM (i.e. input, output, forget, cell); the "Key", "Value" and "Query" layers refer to the projection layers in the attention module (see [61] for details); "Attention MLP" is the second layer of the 2-layer attention MLP; and finally "Word" and "Logits" refer to the word embedding matrix E_w in Eq. 5 and E_o in Eq. 3 respectively.
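For our method, the per-layer pruning ratios shown in Fig. 3 follow directly from the learned gates; a small illustrative helper (our own naming, assuming TF2 eager execution and a dict mapping the layer names above to their gating matrices) would be:

import tensorflow as tf

def layer_sparsity(gates):
    # gates: dict of {layer_name: G_l}. Returns the fraction of weights pruned per layer,
    # i.e. 1 - NNZ/total after thresholding sigma(G_l) at 0.5.
    ratios = {}
    for name, g in gates.items():
        kept = float(tf.reduce_sum(tf.round(tf.sigmoid(g))))
        total = float(g.shape.num_elements())
        ratios[name] = 1.0 - kept / total
    return ratios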
From the figures, we can see that our proposed pruning method consistently prunes the "Attention MLP" layer the least. This is followed by the "LSTM kernel" and "Value" layers, which generally receive less pruning compared to the others. On the flip side, the "Key" and "Query" layers are pruned most heavily, at levels often exceeding the targeted pruning rates. Finally, the word embedding layer ("Word") consistently receives more pruning than the logits layer ("Logits"). This may indicate that there exists substantial information redundancy in the word embedding matrix, as noted in works such as [37,40,62].

Conclusion and Future Work

In this work, we have investigated the effectiveness of model weight pruning on the task of image captioning with visual attention. In particular, we proposed an end-to-end pruning method that performs considerably better than competing methods at maintaining captioning performance while maximising the compression rate. Our single-shot approach is simple and fast to use, provides good performance, and its sparsity level is easy to tune. Moreover, we have demonstrated that, by pruning decoder weights during training, we can find sparse models that perform better than their dense counterparts while significantly reducing model size. Our results pave the way towards deployment on mobile and embedded devices due to the small size and reduced memory requirements of the pruned models. In the future, we wish to investigate the generalisation capability of end-to-end pruning when applied to Transformer models [61]. We would also
4,912
1908.10797
2971306187
The Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and has achieved state-of-the-art (SOTA) performance. However, despite its impressive results, the large number of parameters in RNN models makes deployment on mobile and embedded devices infeasible. Driven by this problem, many works have proposed pruning methods to reduce the size of RNN models. In this work, we propose an end-to-end pruning method for image captioning models equipped with visual attention. Our proposed method is able to achieve sparsity levels up to 97.5% without significant performance loss relative to the baseline (around 1% loss at 40x compression of the GRU model). Our method is also simple to use and tune, facilitating faster development times for neural network practitioners. We perform extensive experiments on the popular MS-COCO dataset in order to empirically validate the efficacy of our proposed method.
Most of the recent works in network pruning focus on vision-centric classification tasks using Convolutional Neural Networks (CNNs) and occasionally RNNs. Techniques proposed include magnitude-based pruning @cite_60 @cite_9 @cite_52 and variational pruning @cite_21 @cite_19 @cite_53 . Among these, magnitude-based weight pruning has become popular due to its effectiveness and simplicity. Most notably, @cite_60 employed a combination of pruning, quantization and Huffman encoding, resulting in massive reductions in model size without affecting accuracy. While unstructured sparse connectivity provides a reduction in storage size, it requires sparse General Matrix-Matrix Multiply (GEMM) libraries such as cuSPARSE and SPBLAS in order to achieve accelerated inference. Motivated by existing hardware architectures optimised for dense linear algebra, many works propose techniques to prune and induce sparsity in a structured way in which entire filters are removed @cite_24 @cite_61 @cite_49 .
{ "abstract": [ "We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 x FLOPs reduction and 16.63× compression on VGG-16, with only 0.52 top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1 top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. 
On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the mini-batch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.", "Neural networks can be compressed to reduce memory and computational requirements, or to increase accuracy by facilitating the use of a larger base architecture. In this paper we focus on pruning individual neurons, which can simultaneously trim model size, FLOPs, and run-time memory. To improve upon the performance of existing compression algorithms we utilize the information bottleneck principle instantiated via a tractable variational bound. Minimization of this information theoretic bound reduces the redundancy between adjacent layers by aggregating useful information into a subset of neurons that can be preserved. In contrast, the activations of disposable neurons are shut off via an attractive form of sparse regularization that emerges naturally from this framework, providing tangible advantages over traditional sparsity penalties without contributing additional tuning parameters to the energy landscape. We demonstrate state-of-the-art compression rates across an array of datasets and network architectures.", "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108x and 17.7x respectively, proving that it outperforms the recent pruning method by considerable margins. 
Code and some models are available at https: github.com yiwenguo Dynamic-Network-Surgery.", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator and report first experimental results with individual dropout rates per weight. Interestingly, it leads to extremely sparse solutions both in fully-connected and convolutional layers. This effect is similar to automatic relevance determination effect in empirical Bayes but has a number of advantages. We reduce the number of parameters up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease of accuracy.", "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss." 
], "cite_N": [ "@cite_61", "@cite_60", "@cite_9", "@cite_21", "@cite_53", "@cite_52", "@cite_24", "@cite_19", "@cite_49" ], "mid": [ "2964233199", "2119144962", "2963674932", "1826234144", "2788440360", "2963981420", "2515385951", "2582745083", "2963145730" ] }
Image Captioning with Sparse Recurrent Neural Network
Automatically generating a caption that describes an image, a problem known as image captioning, is a challenging problem where computer vision (CV) meets natural language processing (NLP). A well performing model not only has to identify the objects in the image, but also capture the semantic relationship between them, general context and the activities that they are involved in. Lastly, the model has to map the visual representation into a fully-formed sentence in a natural language such as English. A good image captioning model can have many useful applications, which include helping the visually impaired to better understand the web contents, providing descriptive annotations of website contents, and enabling better context-based image retrieval by tagging images with accurate natural language descriptions. Driven by user privacy concerns and the quest for lower user-perceived latency, deployment on edge devices away from remote servers is required. As edge devices usually have limited battery capacity and thermal limits, this presents a few key challenges in the form of storage size, power consumption and computational demands [1]. For models incorporating RNNs, on-device inference is often memory bandwidth-bound. As RNN parameters are fixed at every time step, parameter reading forms the bulk of the work [2,1]. As such, RNN pruning offers the opportunity to not only reduce the amount of memory access but also fitting the model in on-chip SRAM cache rather than off-chip DRAM memory, both of which dramatically reduce power consumption [3,4]. Similarly, sparsity patterns for pruned RNNs are fixed across time steps. This offers the potential to factorise scheduling and load balancing operations outside of the loop and enable reuse [2]. Lastly, pruning allows larger RNNs to be stored in memory and trained [2,5] In this work, we propose a one-shot end-to-end pruning method to produce very sparse image captioning decoders (up to 97.5% sparsity) while maintaining good performance relative to the dense baseline model as well as competing methods. We detail our contributions in the following section (Sec. 2.2). Model pruning Modern neural networks that provide good performance tend to be large and overparameterised, fuelled by observations that larger [6,7,8] networks tend to be easier to train. This in turn drives numerous efforts to reduce model size using techniques such as weight pruning and quantisation [9,10,11]. Early works like [12] and [13] explored pruning by computing the Hessian of the loss with respect to the parameters in order to assess the saliency of each parameter. Other works involving saliency computation include [14] and [15] where sensitivity of the loss with respect to neurons and weights are used respectively. On the other hand, works such as [16,17] directly induce network sparsity by incorporating sparsity-enforcing penalty terms into the loss function. Most of the recent works in network pruning focused on vision-centric classification tasks using Convolutional Neural Networks (CNNs) and occasionally RNNs. Techniques proposed include magnitude-based pruning [3,4,18] and variational pruning [19,20,21]. Among these, magnitude-based weight pruning have become popular due to their effectiveness and simplicity. Most notably, [3] employed a combination of pruning, quantization and Huffman encoding resulting in massive reductions in model size without affecting accuracy. 
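To make the magnitude-based pruning idea discussed above concrete, the following is a minimal NumPy sketch of one-shot magnitude pruning. It is a generic illustration rather than the exact procedure of [3,4,18]; the function name and the example matrix size are our own.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that roughly `sparsity` of them are zero."""
    if sparsity <= 0.0:
        return weights.copy()
    k = int(round(sparsity * weights.size))                      # number of weights to remove
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold                           # keep only weights above the threshold
    return weights * mask

# Prune a random 512 x 512 matrix to ~90% sparsity.
w = np.random.randn(512, 512)
w_sparse = magnitude_prune(w, sparsity=0.90)
print(1.0 - np.count_nonzero(w_sparse) / w_sparse.size)          # ~0.90
```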
While unstructured sparse connectivity provides reduction in storage size, it requires sparse General Matrix-Matrix Multiply (GEMM) libraries such as cuSPARSE and SPBLAS in order to achieve accelerated inference. Motivated by existing hardware architectures optimised for dense linear algebra, many works propose techniques to prune and induce sparsity in a structured way in which entire filters are removed [22,23,24]. On the other hand, works extending connection pruning to RNN networks are considerably fewer [25,2,1,26]. See et al. [25] first explored magnitude-based pruning applied to deep multi-layer neural machine translation (NMT) model with Long-Short Term Memory (LSTM) [27]. In their work, three pruning schemes are evaluated which include class-blind, class-uniform and class-distribution. Class-blind pruning was found to produce the best result compared to the other two schemes. Narang et al. [2] introduced a gradual magnitude-based pruning scheme for speech recognition RNNs whereby all the weights in a layer less than some chosen threshold are pruned. Gradual pruning is performed in parallel with network training while pruning rate is controlled by a slope function with two distinct phases. This is extended by Zhu and Gupta [1] who simplified the gradual pruning scheme with reduced hyperparameters. Our contribution Our proposed end-to-end pruning method possesses three main qualities: i) Simple and fast. Our approach enables easy pruning of the RNN decoder equipped with visual attention, whereby the best number of weights to prune in each layer is automatically determined. Compared to works such as [1,2], our approach is simpler with 1 to 2 hyperparameters versus 3 to 4 hyperparameters. Our method also does not rely on reinforcement learning techniques such as in the work of [28]. Moreover, our method applies pruning to all the weights in the RNN decoder and does not require special considerations to exclude pruning from certain weight classes. Lastly our method completes pruning in a single-shot process rather than requiring iterative train-and-prune process as in [29,30,31,32]. ii) Good performance-to-sparsity ratio enabling very high sparsity. Our approach achieves good performance across sparsity levels from 80% up until 97.5% (40× reduction in Number of Non-zeros (NNZ) parameters). This is in contrast with competing methods [1,25] where there is a significant performance drop-off starting at sparsity level of 90%. iii) Easily tunable sparsity level. Our approach provides a way for neural network practitioners to easily control the level of sparsity and compression desired. This allows for model solutions that are tailored for each particular scenario. In contrast, while the closely related works of [33,34] also provide good performance with the incorporation of gating variables, there is not a straightforward way of controlling the final sparsity level. In their works, regularisers such as bi-modal, l 2 , l 1 and l 0 regulariser are used to encourage network sparsity. Their work also only focuses on image classification using CNNs. While there are other works on compressing RNNs, most of the methods proposed either comes with structural constraints or are complementary to model pruning in principle. Examples include using low-rank matrix factorisations [35,36], product quantisation on embeddings [37], factorising word predictions into multiple time steps [38,39,40], and grouping RNNs [41]. 
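For contrast with the single-shot approach proposed in this work, the sketch below illustrates the kind of gradual sparsity schedule used by the magnitude-based methods of [2,1]: the target sparsity is ramped up over training and then held. The cubic ramp and the default arguments here are our own paraphrase of those works, not code taken from them.

```python
def gradual_sparsity(step: int, s_init: float = 0.0, s_final: float = 0.90,
                     begin_step: int = 1000, end_step: int = 15000) -> float:
    """Target sparsity for the current training step under a gradual (ramped) schedule."""
    if step <= begin_step:
        return s_init
    if step >= end_step:
        return s_final
    progress = (step - begin_step) / float(end_step - begin_step)
    # Sparsity rises quickly at first and then saturates towards s_final.
    return s_final + (s_init - s_final) * (1.0 - progress) ** 3

for step in (0, 1000, 4000, 8000, 15000, 20000):
    print(step, round(gradual_sparsity(step), 3))
```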
Lastly, another closely related work by [30] also incorporated model pruning into image captioning. However, we note three notable differences: 1) their work is focused on proposing a new LSTM cell structure named the H-LSTM; 2) their work utilises the grow-and-prune (GP) method [31], which necessitates compute- and time-expensive iterative pruning; and 3) the compression figures stated are calculated based on the size of the LSTM cells instead of the entire decoder.

Proposed Method

Our proposed method involves incorporating learnable gating parameters into a regular image captioning framework. We denote weight, bias and gating matrices as $W$, $B$ and $G$ respectively. For a model with $L$ layers, the captioning and gating parameters are denoted as $\theta$ and $\phi$ such that $\theta = \{W_{1:L}, B_{1:L}\}$ and $\phi = \{G_{1:L}\}$. As there is a substantial body of existing work focusing on pruning CNNs, we focus our efforts on pruning generative RNNs. As such, we only prune the RNN decoder. All model size calculations in this work include only the decoder (including the attention module), while the encoder (i.e. the CNN) is excluded.

Image captioning with visual attention

Our image captioning framework of interest is a simplified variant of the Show, Attend and Tell [42] model, which uses a single-layer RNN equipped with visual attention on the CNN feature map. It is a popular framework that forms the basis for subsequent state-of-the-art (SOTA) works on image captioning [43,44]. In this work, we employ LSTM and Gated Recurrent Unit (GRU) [45] as the RNN cell. Suppose $\{S_0, \dots, S_{T-1}\}$ is a sequence of words in a sentence of length $T$; the model directly maximises the probability of the correct description given an image $I$ using the following formulation:

$\log p(S \mid I) = \sum_{t=0}^{T} \log p(S_t \mid I, S_{0:t-1}, c_t)$  (1)

where $t$ is the time step and $p(S_t \mid I, S_{0:t-1}, c_t)$ is the probability of generating a word given an image $I$, previous words $S_{0:t-1}$, and context vector $c_t$. For an RNN with $r$ units, the hidden state of the RNN is initialised with the image embedding vector as follows:

$h_{t=-1} = W_I I_{embed}, \qquad m_{t=-1} = 0$  (2)

where $W_I \in \mathbb{R}^{r \times h}$ is a weight matrix and $h$ is the size of $I_{embed}$. The attention function used in this work is the soft attention introduced by [46] and used in [42], where a multilayer perceptron (MLP) with a single hidden layer is employed to calculate the attention weights on a particular feature map. The context vector $c_t$ is then concatenated with the previous predicted word embedding to serve as input to the RNN. Finally, a probability distribution over the vocabulary is produced from the hidden state $h_t$:

$p_t = \mathrm{Softmax}(E_o h_t)$  (3)

$h_t, m_t = \mathrm{RNN}(x_t, h_{t-1}, m_{t-1})$  (4)

$x_t = [E_w S_{t-1}, c_t]$  (5)

$c_t = \mathrm{SoftAtt}(f)$  (6)

where $E_w \in \mathbb{R}^{q \times v}$ and $E_o \in \mathbb{R}^{v \times r}$ are the input and output embedding matrices respectively; $p_t$ is the probability distribution over the vocabulary $V$; $m_t$ is the memory state; $x_t$ is the current input; $S_{t-1} \in \mathbb{R}^{q}$ is the one-hot vector of the previous word; $c_t \in \mathbb{R}^{a}$ is the context vector; $f$ is the CNN feature map; and $[\cdot\,,\cdot]$ is the concatenation operator. For GRU, all $m_t$ terms are ignored. Finally, the standard cross-entropy loss function for the captioning model $\theta$ is given by:

$\mathcal{L}_c = -\sum_{t}^{T} \log p_t(S_t) + \lambda_d \lVert \theta \rVert_2^2$  (7)
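As a concrete reading of Eqs. 1-6, the following NumPy sketch runs one decoding step with soft attention. The random matrices stand in for trained parameters, a plain tanh recurrence stands in for the LSTM/GRU cell of Eq. 4, and the projection W_c that maps the attended feature vector down to the a-dimensional context is our assumption; the sizes follow the hyperparameters reported later.

```python
import numpy as np

rng = np.random.default_rng(0)
r, q, a, v = 512, 256, 512, 10000        # RNN units, word size, attention size, vocabulary
n_loc, d_feat = 196, 832                 # CNN feature map f: 196 locations of 832 dims

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Random stand-ins for trained parameters (shapes are hypothetical except where stated in the text).
W_f = rng.normal(size=(d_feat, a)) * 0.01   # attention MLP, feature branch
W_h = rng.normal(size=(r, a)) * 0.01        # attention MLP, hidden-state branch
w_a = rng.normal(size=(a,)) * 0.01          # attention MLP, second layer
W_c = rng.normal(size=(d_feat, a)) * 0.01   # assumed projection of the attended feature to R^a
E_w = rng.normal(size=(q, v)) * 0.01        # input word embedding (Eq. 5)
E_o = rng.normal(size=(v, r)) * 0.01        # output embedding (Eq. 3)
W_x = rng.normal(size=(q + a, r)) * 0.01    # simplified recurrent cell (stands in for Eq. 4)
W_r = rng.normal(size=(r, r)) * 0.01

def soft_att(f, h):
    scores = np.tanh(f @ W_f + h @ W_h) @ w_a    # one score per feature-map location
    alpha = softmax(scores)                      # attention weights
    return (alpha @ f) @ W_c                     # context vector c_t in R^a (Eq. 6)

def decode_step(f, h_prev, prev_word_id):
    c_t = soft_att(f, h_prev)
    s_prev = np.zeros(v); s_prev[prev_word_id] = 1.0   # one-hot previous word
    x_t = np.concatenate([E_w @ s_prev, c_t])          # Eq. 5
    h_t = np.tanh(x_t @ W_x + h_prev @ W_r)            # Eq. 4 (vanilla RNN instead of LSTM/GRU)
    p_t = softmax(E_o @ h_t)                           # Eq. 3
    return h_t, p_t

h = np.zeros(r)                          # Eq. 2 would initialise this from W_I I_embed
f = rng.normal(size=(n_loc, d_feat))
h, p = decode_step(f, h, prev_word_id=0)
print(p.shape, round(float(p.sum()), 3))  # (10000,) 1.0
```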
End-to-end pruning

Formulation. Similar to [1], the TensorFlow framework is extended to prune network connections during training. Inspired by the concept of learnable Supermasks introduced by [33,47], our proposed method achieves model pruning via learnable gating variables that are trained in an end-to-end fashion. An overview of our method is illustrated in Fig. 1. For every weight matrix $W$ to be pruned, we create a gating variable matrix $G$ with the same shape as $W$. This gating matrix $G$ functions as a masking mechanism that determines which of the parameters $w$ in the weight matrix $W$ participate in both the forward execution and the back-propagation of the graph. To achieve this masking effect, we calculate the effective weight tensor as follows:

$\widetilde{W}_l = W_l \odot G^b_l$  (8)

$G^b_l = z(\sigma(G_l))$  (9)

where $W_l, G_l \in \mathbb{R}^{D}$ are the original weight and gating matrices from layer $l$ with shape $D$, and the superscript $(\cdot)^b$ indicates binary sampled variables. $\odot$ is element-wise multiplication; $\sigma(\cdot)$ is a point-wise function that transforms continuous values into the interval $(0, 1)$; and $z(\cdot)$ is a point-wise function that samples from a Bernoulli distribution. The composite function $z(\sigma(\cdot))$ thus effectively transforms continuous values into binary values. Binary gating matrices $G^b$ can be obtained by treating $\sigma(G)$ as Bernoulli random variables. While there are many possible choices for the $\sigma$ function, we decided to use the logistic sigmoid function following [48] and [47]. To sample from the Bernoulli distribution, we can either perform an unbiased draw or a maximum-likelihood (ML) draw [33]. An unbiased draw is the usual sampling process where a gating value $g \in (0, 1)$ is binarised to 1.0 with probability $g$ and 0.0 otherwise, whereas an ML draw involves thresholding the value $g$ at 0.5. In this work, we denote the unbiased and ML draws using the sampling functions $z(\cdot) = \mathrm{Bern}(\cdot)$ and $z(\cdot) = \mathrm{Round}(\cdot)$ respectively. We back-propagate through both sampling functions using the straight-through estimator [48] (i.e. $\partial z(g)/\partial g = 1$). Prior to training, all the gating variables are initialised to the same constant value $m$, while the weights and biases of the network are initialised using standard initialisation schemes (e.g. Xavier [49]). During training, both sampling functions $\mathrm{Bern}(\cdot)$ and $\mathrm{Round}(\cdot)$ are used in different ways. To obtain the effective weight tensor used to generate network activations, we use $\mathrm{Bern}(\cdot)$ to inject some stochasticity that helps with training and to mitigate the bias arising from the constant-value initialisation. Thus the effective weight calculation becomes:

$\widetilde{W}_l = W_l \odot \mathrm{Bern}(\sigma(G_l))$  (10)

To drive the sparsity level of the gating variables $\phi$ to the user-specified level $s_{target}$, we introduce a regularisation term $\mathcal{L}_s$. Consistent with the observations in the works of [1] and [32], we found that annealing the loss over the course of training produces the best result. Annealing is done using a cosine curve $\alpha$ defined in Eq. 12. To ensure determinism when calculating sparsity, we use $\mathrm{Round}(\cdot)$ to sample from $\sigma(G)$:

$\mathcal{L}_s = (1 - \alpha) \times \left| s_{target} - \left(1 - \frac{p_{nnz}}{p_{total}}\right) \right|$  (11)

$\alpha = \frac{1}{2}\left(1 + \cos \frac{n\pi}{n_{max}}\right)$  (12)

$p_{nnz} = \sum_{l=0}^{L} \sum_{j=0}^{J} \mathrm{Round}(\sigma(g_{j,l}))$  (13)

where $p_{nnz}$ is the number of NNZ gating parameters; $p_{total}$ is the total number of gating parameters; $n$ and $n_{max}$ are the current and final training steps respectively; $g_{j,l}$ is the gating parameter at position $j$ in the matrix $G_l$ from layer $l$; $L$ is the number of layers; and $J$ is the number of parameters in matrix $G_l$.
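The formulation in Eqs. 8-13 maps naturally onto a masked layer plus a sparsity regulariser. Below is a minimal PyTorch sketch (the paper's own implementation extends TensorFlow r1.9); the class and function names, layer sizes and initialisation scale are our own, the absolute value in the sparsity penalty follows our reconstruction of Eq. 11, and a deterministic Eq. 15-style mask extraction is included for completeness.

```python
import math
import torch

def bern_ste(p):
    """Unbiased draw z(.) = Bern(.): hard 0/1 in the forward pass, identity gradient (STE)."""
    return torch.bernoulli(p) + (p - p.detach())

def round_ste(p):
    """Maximum-likelihood draw z(.) = Round(.): threshold at 0.5, also straight-through."""
    return (p >= 0.5).float() + (p - p.detach())

class GatedLinear(torch.nn.Module):
    """A linear layer whose weight W is masked by a learnable gate matrix G of the same shape."""
    def __init__(self, in_dim, out_dim, gate_init=5.0):
        super().__init__()
        self.weight = torch.nn.Parameter(0.01 * torch.randn(out_dim, in_dim))
        self.gate = torch.nn.Parameter(torch.full((out_dim, in_dim), gate_init))  # G, init m

    def forward(self, x, stochastic=True):
        probs = torch.sigmoid(self.gate)                              # sigma(G)
        mask = bern_ste(probs) if stochastic else round_ste(probs)    # Eq. 9
        return torch.nn.functional.linear(x, self.weight * mask)      # Eqs. 8 and 10

def sparsity_loss(gates, step, max_steps, s_target):
    """Eqs. 11-13: cosine-annealed penalty on the gap between current and target sparsity."""
    alpha = 0.5 * (1.0 + math.cos(math.pi * step / max_steps))        # Eq. 12
    p_total = sum(g.numel() for g in gates)
    p_nnz = sum(round_ste(torch.sigmoid(g)).sum() for g in gates)     # Eq. 13
    return (1.0 - alpha) * (s_target - (1.0 - p_nnz / p_total)).abs() # Eq. 11

def finalize_weight(weight, gate):
    """After training: deterministic Round(sigma(G)) mask; G can then be discarded."""
    with torch.no_grad():
        return weight * (torch.sigmoid(gate) >= 0.5).float()

# Toy usage: one masked layer, combined objective L = L_c + lambda_s * L_s (cf. Eq. 14).
layer = GatedLinear(768, 512)
y = layer(torch.randn(4, 768))
l_s = sparsity_loss([layer.gate], step=100, max_steps=10000, s_target=0.95)
print(y.shape, float(l_s))
```

During decoder training, the gates receive gradients both from the captioning loss (through the masked weights) and from the sparsity penalty, which is how parameter saliency is learned end-to-end.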
The progression of the sparsity loss $\mathcal{L}_s$ as well as the sparsity levels of various layers in the decoder are illustrated in Fig. 2b and 2a respectively. (Fig. 2 caption: a detailed explanation for (a) is given in Sec. 5.6; in (b), "weighted annealed loss" refers to $\lambda_s \mathcal{L}_s$ in Eq. 14, while "loss" refers to $\mathcal{L}_s$ before applying the cosine annealing in Eq. 11.) The final objective function used to train the captioning model $\theta$ with gating variables $\phi$ is:

$\mathcal{L}(I, S, s_{target}) = \mathcal{L}_c + \lambda_s \mathcal{L}_s$  (14)

Intuitively, the captioning loss term $\mathcal{L}_c$ provides supervision for learning the saliency of each parameter, where important parameters are retained with higher probability while unimportant ones are dropped more frequently. On the other hand, the sparsity regularisation term $\mathcal{L}_s$ pushes down the average value of the Bernoulli gating parameters so that most of them have a value less than 0.5 after sigmoid activation. The hyperparameter $\lambda_s$ determines the weightage of $\mathcal{L}_s$. If $\lambda_s$ is too low, the target sparsity level might not be attained, whereas high values might slightly affect performance (see Sec. 5.1).

Training and Inference. The training process of the captioning model is divided into two distinct stages: decoder training and end-to-end fine-tuning. During the decoder training stage, we freeze the CNN parameters and only learn the decoder and gating parameters by optimising the loss given in Eq. 14. For the fine-tuning stage, we restore all the parameters $\theta$ and $\phi$ from the last checkpoint at the end of decoder training and optimise the entire model including the CNN. During this stage, $\mathrm{Bern}(\cdot)$ is still used but all $\phi$ parameters are frozen. After training is completed, all the weight matrices $W_{1:L}$ are transformed into sparse matrices by sampling from $G_{1:L}$ using $\mathrm{Round}(\cdot)$, after which $G$ can be discarded. In other words, the final weights $W^f$ are calculated as:

$W^f_l = W_l \odot \mathrm{Round}(\sigma(G_l))$  (15)

Experiment Setup

Unless stated otherwise, all experiments have the following configurations. We did not perform an extensive hyperparameter search due to limited resources.

Hyperparameters

Models are implemented using TensorFlow r1.9. The image encoder used in this work is GoogLeNet (Inception-V1) with batch normalisation [50,51] pre-trained on ImageNet [52]. The input images are resized to 256 × 256, then randomly flipped and cropped to 224 × 224 before being fed to the CNN. The attention function $\mathrm{SoftAtt}(\cdot)$ operates on the Mixed-4f map $f \in \mathbb{R}^{196 \times 832}$. The size of the context vector $c_t$ and the attention MLP is set to $a = 512$. A single-layer LSTM or GRU network with a hidden state size of $r = 512$ is used. The word embedding size is set to $q = 256$ dimensions. The optimiser used for decoder training is Adam [53], with a batch size of 32. The initial learning rate (LR) is set to $1 \times 10^{-2}$ and annealed using the cosine curve $\alpha$ defined in Eq. 12, ending at $1 \times 10^{-5}$. All models are trained for 30 epochs. The weight decay rate is set to $\lambda_d = 1 \times 10^{-5}$. For fine-tuning, a smaller initial LR of $1 \times 10^{-3}$ is used and the entire model is trained for 10 epochs. Captioning model parameters are initialised randomly using Xavier uniform initialisation [49]. The input and output dropout rates for the dense RNN are both set to 0.35, while the attention map dropout rate is set to 0.1. Following [4,2], lower dropout rates are used for sparse networks, where the RNN and attention dropout rates are set to 0.11 and 0.03 respectively. This is done to account for the reduced capacity of the sparse models. For fair comparison, we apply pruning to all weights of the captioning model for all of the pruning schemes.
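For reference, the training settings above can be collected into a small configuration sketch; the values mirror the ones stated in this section, while the dictionary keys are our own naming.

```python
# Decoder-training and fine-tuning configuration as stated in the text (keys are our own).
train_config = {
    "encoder": "GoogLeNet (Inception-V1) with batch norm, ImageNet pre-trained, frozen",
    "attention_size_a": 512,
    "rnn_units_r": 512,
    "word_embedding_q": 256,
    "optimizer": "Adam",
    "batch_size": 32,
    "epochs": 30,
    "lr_schedule": {"initial": 1e-2, "final": 1e-5, "annealing": "cosine (Eq. 12)"},
    "weight_decay": 1e-5,
    "dropout_dense": {"rnn_in_out": 0.35, "attention_map": 0.10},
    "dropout_sparse": {"rnn_in_out": 0.11, "attention_map": 0.03},
}
finetune_config = {"lr_initial": 1e-3, "epochs": 10, "train_cnn": True, "gates_frozen": True}
print(train_config["lr_schedule"], finetune_config)
```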
For our proposed method, we train the gating variables $\phi$ with a higher constant LR of 100 without annealing, which is consistent with [47]. We found that an LR lower than 100 causes $\phi$ to train too slowly. We set $\lambda_s$ according to this heuristic: $\lambda_s = \max(5,\ 0.5/(1 - s_{target}))$. All gating parameters $\phi$ are initialised to a constant $m = 5.0$; see Sec. 5.1 for other values. For gradual pruning [1], pruning is started after the first epoch is completed and ended at the end of epoch 15, following the general heuristics outlined in [2]. The pruning frequency is set to 1000. We use the standard scheme where each layer is pruned to the same pruning ratio at every step. For hard pruning [25], pruning is applied to the dense baseline model after training is completed. Retraining is then performed for 10 epochs. The LR and annealing schedule are the same as those used for the dense baseline. For inference, beam search is used in order to better approximate $S = \arg\max_S p(S \mid I)$. The beam size is set to $b = 3$ with no length normalisation. We evaluate the last checkpoint upon completion of training for all the experiments. We denote the compression ratio as CR.

Dataset

The experiments are performed on the popular MS-COCO dataset [54]. It is a public English captioning dataset which contains 123,287 images, and each image is given at least 5 captions by different Amazon Mechanical Turk (AMT) workers. As there is no official test split with annotations available, the publicly available split in the work of [55] is used in this work. The split assigns 5,000 images for validation, another 5,000 for testing and the rest for training. We reuse the publicly available tokenised captions. Words that occur less than 5 times are filtered out and sentences longer than 20 words are truncated. All the scores are obtained using the publicly available MS-COCO evaluation toolkit, which computes BLEU [56], METEOR [57], ROUGE-L [58], CIDEr [59] and SPICE [60]. For the sake of brevity, we label BLEU-1 to BLEU-4 as B-1 to B-4, and METEOR, ROUGE-L, CIDEr, SPICE as M, R, C, S respectively.

Experiments and Discussion

Ablation study

Table 1 shows the effect of various gating initialisation values. From the table, we can see that the best overall performance is achieved when $m$ is set to 5. Starting the gating parameters at a value of 5 allows all the captioning parameters $\theta$ to be retained with high probability at the early stages of training, allowing better convergence. This observation is also consistent with the works of [1] and [32], where the authors found that gradual pruning and late resetting can lead to better model performance. Thus, we recommend setting $m = 5.0$. Table 2 shows the effect of the sparsity regularisation weightage $\lambda_s$. This is an important hyperparameter that can affect the final sparsity level at convergence. From the results, we can see that low values lead to insufficient sparsity, and a higher sparsity target $s_{target}$ requires a higher $\lambda_s$. For image captioning on MS-COCO, we empirically determined that the heuristic given in Sec. 4.1 works sufficiently well for sparsity levels from 80% to 97.5% (see Tables 3 and 4).
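The $\lambda_s$ heuristic can be read off directly; the minimal helper below shows the values it yields at the sparsity targets studied here (the printout is for illustration only).

```python
def lambda_s(s_target: float) -> float:
    """Heuristic from Sec. 4.1: lambda_s = max(5, 0.5 / (1 - s_target))."""
    return max(5.0, 0.5 / (1.0 - s_target))

for s in (0.80, 0.90, 0.95, 0.975):
    print(f"s_target = {s:.3f} -> lambda_s = {lambda_s(s):g}")
# s_target = 0.800 -> lambda_s = 5
# s_target = 0.900 -> lambda_s = 5
# s_target = 0.950 -> lambda_s = 10
# s_target = 0.975 -> lambda_s = 20
```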
Comparison with RNN pruning methods

In this section, we provide extensive comparisons of our proposed method with the dense baselines as well as competing methods at multiple sparsity levels. All the models have been verified to have achieved the targeted sparsity levels. From Tables 3 and 4, we can clearly see that our proposed end-to-end pruning provides good performance when compared to the dense baselines. This is true even at high pruning ratios of 90% and 95%. The relative drops in BLEU-4 and CIDEr scores are only −1.0% to −2.9% and −1.3% to −2.9% while having 10-20× fewer NNZ parameters. This is in contrast with competing methods whose performance drops are double or even triple ours, especially for LSTM. The performance advantage provided by end-to-end pruning is even more apparent at the high pruning ratio of 97.5%, offering a large 40× reduction in NNZ parameters. Even though we suffer relative degradations of −4.8% to −6.4% in BLEU-4 and CIDEr scores compared to the baselines, our performance is still significantly better than that of the next-closest method, which is gradual pruning. On the other hand, the performance achieved by our 80% pruned models is extremely close to that of the baselines. Our sparse LSTM model even very slightly outperforms the baseline on some metrics, although we note that the standard deviation of the CIDEr score across training runs is around 0.3 to 0.9. Among the competing methods, we can see that gradual pruning usually outperforms hard pruning, especially at high sparsities of 95% and 97.5%.

[Table 4: Comparison with dense GRU baseline and competing methods. Bold text indicates best overall performance. "Gradual" and "Hard" denote methods proposed in [1] and [25].]

That being said, we can see that class-blind hard pruning is able to produce good results at moderate pruning rates of 80% and 90%, even outperforming gradual pruning. This is especially true for the GRU captioning model, where it briefly outperforms all other methods at 90% sparsity; however, we note that its performance on LSTM is generally lower. In contrast, our proposed approach achieves good performance on both LSTM and GRU models. All in all, these results showcase the strength of our proposed method. Across pruning ratios from 80% to 97.5%, our approach consistently maintains relatively good performance when compared to the dense baselines while outperforming magnitude-based gradual and hard pruning methods in most cases.

Effect of fine-tuning

In this section, we investigate the potential impact of fine-tuning the entire captioning model in an end-to-end manner. From Table 5, we can see that model fine-tuning has a performance-recovering effect on the sparse models. This phenomenon is especially apparent on very sparse models with sparsity at 97.5%. On both LSTM and GRU models, the drops in performance suffered due to pruning have mostly diminished, except for LSTM at 80% sparsity. Notably, all the pruned models have remarkably similar performance from 80% sparsity up until 97.5%. The score gap between the dense and sparse GRU models is exceedingly small, ranging from +1.2% to −1.9% for both BLEU-4 and CIDEr. For LSTM models, even though the score gap is slightly larger at −0.9% to −2.5% on both BLEU-4 and CIDEr, it is still considerably smaller than without CNN fine-tuning (Table 3). These results suggest that the Inception-V1 CNN pre-trained on ImageNet is not optimised to provide useful features for sparse decoders. As such, end-to-end fine-tuning together with the sparse decoder allows the features extracted by the CNN to be adapted, so that useful semantic information can be propagated through the surviving connections in the decoder. We also provide a compression and performance comparison with the closely related work of [30], which utilised the GP [31] method to produce a sparse H-LSTM for image captioning. For fairness, we also provide scores obtained at a CR of 40× using a beam size of 2 instead of 3.
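The compression ratios quoted in this comparison and in the next one follow directly from the sparsity levels; a small helper makes the arithmetic explicit (the printed values are illustrative only).

```python
def compression_ratio(sparsity: float) -> float:
    """CR = dense parameter count / NNZ parameter count for an unstructured-pruned model."""
    return 1.0 / (1.0 - sparsity)

def relative_change(sparse_score: float, dense_score: float) -> float:
    """Relative score change in percent, e.g. -2.9 means 2.9% below the dense baseline."""
    return (sparse_score - dense_score) / dense_score * 100.0

for s in (0.80, 0.90, 0.95, 0.975):
    print(f"{s:.3f} sparsity -> {compression_ratio(s):.0f}x fewer NNZ parameters")
# 0.800 -> 5x, 0.900 -> 10x, 0.950 -> 20x, 0.975 -> 40x
```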
From the table, we can see that at overall CR of 20× to 40×, our sparse models are able to outperform H-LSTM with lower NNZ parameters. This indicates that the effectiveness of our one-shot approach is at least comparable to the iterative process of grow-and-prune. Large-sparse versus small-dense In this section, we show that a large sparse LSTM image captioning model produced via endto-end pruning is able to outperform a smaller dense LSTM trained normally. The small-dense model denoted as LSTM-S has a word embedding size of q = 64 dimensions, LSTM size of r = 128 units and finally attention MLP size of a = 96 units. The results are given in Table 6. From the results, we can see that the small-dense model with 5× fewer parameters performs considerably worse than all the large-sparse models LSTM-M across the board. Notably, we can see that the large-sparse LSTM-M model with 40× fewer NNZ parameters still managed to outperform LSTM-S with a considerable margin. At equal NNZ parameters, the large-sparse model comfortably outperforms the small-dense model. This showcases further the strength of model pruning and solidifies the observations made in works on RNN pruning [2, 1]. Caption uniqueness and length In this section, we explore the potential effects of our proposed end-to-end pruning on the uniqueness and length of the generated captions. As pruning reduces the complexity and capacity of the decoder considerably, we wish to see if the sparse models show any signs of training data memorisation and hence potentially overfitting. In such cases, uniqueness of the generated captions would decrease as the decoder learns to simply repeat captions available in the training set. A generated caption is considered to be unique if it is not found in the training set. From Table 7, we can see that despite the heavy reductions in NNZ parameters, the uniqueness of generated captions have not decreased. On the contrary, more unseen captions are being generated at higher levels of sparsity and compression. On the other hand, we can see that the average lengths of generated captions peaked at 80% sparsity in most cases and then decrease slightly as sparsity increase. That being said, the reductions in caption length are minimal (+0.5% to −2.3%) considering the substantial decoder compression rates of up to 40×. Together with the good performance shown in Table 3 and 4, these results indicate that our approach is able to maintain both the variability of generated captions and their quality as measured by the metric scores. Layer-wise sparsity comparison Finally, we visualise the pruning ratio of each decoder layers when pruned using the different methods listed in Sec. 5.2. Among the approaches, both gradual and class-uniform pruning produces the same sparsity level across all the layers. To better showcase the differences in layer-wise pruning ratios, we decided to visualise two opposite ends in which the first has a relatively moderate sparsity of 80% while the other has a high sparsity of 97.5%. In both Fig. 3a and 3b, we denote the decoder layers as follows: "RNN initial state" refers to W I in Eq. 2; "LSTM kernel" is the concatenation of all gate kernels in LSTM (i.e. input, output, forget, cell); "Key", "Value" and "Query" layers refer to projection layers in the attention module (see [61] for details); "Attention MLP" is the second layer of the 2-layer attention MLP; and finally "Word" and "Logits" refer to the word embedding matrix E w in Eq. 5 and E o in Eq. 3 respectively. 
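The layer-wise comparison that follows boils down to measuring, per named decoder layer, the fraction of masked-out weights. A short sketch is given below; the layer shapes in the example are made up purely for illustration.

```python
import numpy as np

def layerwise_sparsity(masks: dict) -> dict:
    """Per-layer and overall sparsity, where `masks` maps layer names to 0/1 arrays."""
    report, total, nnz = {}, 0, 0
    for name, m in masks.items():
        report[name] = 1.0 - m.sum() / m.size
        total += m.size
        nnz += int(m.sum())
    report["overall"] = 1.0 - nnz / total
    return report

rng = np.random.default_rng(0)
shapes = {"RNN initial state": (1024, 512), "LSTM kernel": (1280, 2048),
          "Attention MLP": (512, 512), "Word": (10000, 256), "Logits": (512, 10000)}
masks = {k: (rng.random(v) > 0.9).astype(float) for k, v in shapes.items()}  # ~90% zeros
for name, s in layerwise_sparsity(masks).items():
    print(f"{name}: {s:.3f}")
```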
From the figures, we can see that our proposed pruning method consistently prunes the "attention MLP" layer the least. This is followed by the "LSTM kernel" and "Value" layers, which generally receive less pruning than the others. On the flip side, the "Key" and "Query" layers are pruned most heavily, at levels often exceeding the targeted pruning rates. Finally, the "Word" embedding consistently receives more pruning than the "Logits" layer. This may indicate that there exists substantial information redundancy in the word embedding matrix, as noted in works such as [37,40,62].

Conclusion and Future Work

In this work, we have investigated the effectiveness of model weight pruning on the task of image captioning with visual attention. In particular, we proposed an end-to-end pruning method that performs considerably better than competing methods at maintaining captioning performance while maximising the compression rate. Our single-shot approach is simple and fast to use, provides good performance, and its sparsity level is easy to tune. Moreover, we have demonstrated that, by pruning decoder weights during training, we can find sparse models that perform better than their dense counterparts while significantly reducing model size. Our results pave the way towards deployment on mobile and embedded devices due to the models' small size and reduced memory requirements. In the future, we wish to investigate the generalisation capability of end-to-end pruning when applied to Transformer models [61]. We would also
4,912
1908.10797
2971306187
The Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and has achieved state-of-the-art (SOTA) performance. However, despite its impressive results, the large number of parameters in the RNN model makes deployment on mobile and embedded devices infeasible. Driven by this problem, many works have proposed pruning methods to reduce the size of the RNN model. In this work, we propose an end-to-end pruning method for image captioning models equipped with visual attention. Our proposed method is able to achieve sparsity levels up to 97.5% without significant performance loss relative to the baseline (around 1% loss at 40x compression of the GRU model). Our method is also simple to use and tune, facilitating faster development times for neural network practitioners. We perform extensive experiments on the popular MS-COCO dataset in order to empirically validate the efficacy of our proposed method.
[label= *)] Simple and fast. Our approach enables easy pruning of the RNN decoder equipped with visual attention, whereby the best number of weights to prune in each layer is automatically determined. Compared to works such as @cite_45 @cite_51 , our approach is simpler with a single hyperparameter versus @math - @math hyperparameters. Our method also does not rely on reinforcement learning techniques such as in the work of @cite_4 . Moreover, our method applies pruning to all the weights in the RNN decoder and does not require special considerations to exclude pruning from certain weight classes. Lastly our method completes pruning in a single-shot process rather than requiring iterative train-and-prune process as in @cite_17 @cite_18 @cite_47 @cite_50 . Good performance-to-sparsity ratio enabling extreme sparsity. Our approach achieves good performance across sparsity levels from @math l_2 @math l_1 @math l_0$ regulariser are used to encourage network sparsity. Their work also only focuses on image classification using CNNs.
{ "abstract": [ "Long short-term memory (LSTM) has been widely used for sequential data modeling. Researchers have increased LSTM depth by stacking LSTM cells to improve performance. This incurs model redundancy, increases run-time delay, and makes the LSTMs more prone to overfitting. To address these problems, we propose a hidden-layer LSTM (H-LSTM) that adds hidden layers to LSTM's original one level non-linear control gates. H-LSTM increases accuracy while employing fewer external stacked layers, thus reducing the number of parameters and run-time latency significantly. We employ grow-and-prune (GP) training to iteratively adjust the hidden layers through gradient-based growth and magnitude-based pruning of connections. This learns both the weights and the compact architecture of H-LSTM control gates. We have GP-trained H-LSTMs for image captioning and speech recognition applications. For the NeuralTalk architecture on the MSCOCO dataset, our three models reduce the number of parameters by 38.7x [floating-point operations (FLOPs) by 45.5x], run-time latency by 4.5x, and improve the CIDEr score by 2.6. For the DeepSpeech2 architecture on the AN4 dataset, our two models reduce the number of parameters by 19.4x (FLOPs by 23.5x), run-time latency by 15.7 , and the word error rate from 12.9 to 8.7 . Thus, GP-trained H-LSTMs can be seen to be compact, fast, and accurate.", "Model compression is an effective technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted features and require domain experts to explore the large design space trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC) which leverages reinforcement learning to efficiently sample the design space and can improve the model compression quality. We achieved state-of-the-art model compression results in a fully automated way without any human efforts. Under 4 ( ) FLOPs reduction, we achieved 2.7 better accuracy than the hand-crafted model compression method for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet-V1 and achieved a speedup of 1.53 ( ) on the GPU (Titan Xp) and 1.95 ( ) on an Android phone (Google Pixel 1), with negligible loss of accuracy.", "Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (, 2015; , 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models datasets with minimal tuning and can be seamlessly incorporated within the training process. 
We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.", "The lottery ticket hypothesis proposes that over-parameterization of deep neural networks (DNNs) aids training by increasing the probability of a \"lucky\" sub-network initialization being present rather than by helping the optimization process. This phenomenon is intriguing and suggests that initialization strategies for DNNs can be improved substantially, but the lottery ticket hypothesis has only previously been tested in the context of supervised learning for natural image tasks. Here, we evaluate whether \"winning ticket\" initializations exist in two different domains: reinforcement learning (RL) and in natural language processing (NLP). For RL, we analyzed a number of discrete-action space tasks, including both classic control and pixel control. For NLP, we examined both recurrent LSTM models and large-scale Transformer models. Consistent with work in supervised image classification, we confirm that winning ticket initializations generally outperform parameter-matched random initializations, even at extreme pruning rates. Together, these results suggest that the lottery ticket hypothesis is not restricted to supervised learning of natural images, but rather represents a broader phenomenon in DNNs.", "", "Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90 and speed-up is around 2x to 7x.", "Neural network pruning techniques can reduce the parameter counts of trained networks by over 90 , decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. 
Based on these results, we articulate the \"lottery ticket hypothesis:\" dense, randomly-initialized, feed-forward networks contain subnetworks (\"winning tickets\") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20 of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy." ], "cite_N": [ "@cite_18", "@cite_4", "@cite_45", "@cite_50", "@cite_47", "@cite_51", "@cite_17" ], "mid": [ "2806862281", "2886851211", "2952344559", "2948130861", "", "2608554408", "2805003733" ] }
From the table, we can see that at overall CR of 20× to 40×, our sparse models are able to outperform H-LSTM with lower NNZ parameters. This indicates that the effectiveness of our one-shot approach is at least comparable to the iterative process of grow-and-prune. Large-sparse versus small-dense In this section, we show that a large sparse LSTM image captioning model produced via endto-end pruning is able to outperform a smaller dense LSTM trained normally. The small-dense model denoted as LSTM-S has a word embedding size of q = 64 dimensions, LSTM size of r = 128 units and finally attention MLP size of a = 96 units. The results are given in Table 6. From the results, we can see that the small-dense model with 5× fewer parameters performs considerably worse than all the large-sparse models LSTM-M across the board. Notably, we can see that the large-sparse LSTM-M model with 40× fewer NNZ parameters still managed to outperform LSTM-S with a considerable margin. At equal NNZ parameters, the large-sparse model comfortably outperforms the small-dense model. This showcases further the strength of model pruning and solidifies the observations made in works on RNN pruning [2, 1]. Caption uniqueness and length In this section, we explore the potential effects of our proposed end-to-end pruning on the uniqueness and length of the generated captions. As pruning reduces the complexity and capacity of the decoder considerably, we wish to see if the sparse models show any signs of training data memorisation and hence potentially overfitting. In such cases, uniqueness of the generated captions would decrease as the decoder learns to simply repeat captions available in the training set. A generated caption is considered to be unique if it is not found in the training set. From Table 7, we can see that despite the heavy reductions in NNZ parameters, the uniqueness of generated captions have not decreased. On the contrary, more unseen captions are being generated at higher levels of sparsity and compression. On the other hand, we can see that the average lengths of generated captions peaked at 80% sparsity in most cases and then decrease slightly as sparsity increase. That being said, the reductions in caption length are minimal (+0.5% to −2.3%) considering the substantial decoder compression rates of up to 40×. Together with the good performance shown in Table 3 and 4, these results indicate that our approach is able to maintain both the variability of generated captions and their quality as measured by the metric scores. Layer-wise sparsity comparison Finally, we visualise the pruning ratio of each decoder layers when pruned using the different methods listed in Sec. 5.2. Among the approaches, both gradual and class-uniform pruning produces the same sparsity level across all the layers. To better showcase the differences in layer-wise pruning ratios, we decided to visualise two opposite ends in which the first has a relatively moderate sparsity of 80% while the other has a high sparsity of 97.5%. In both Fig. 3a and 3b, we denote the decoder layers as follows: "RNN initial state" refers to W I in Eq. 2; "LSTM kernel" is the concatenation of all gate kernels in LSTM (i.e. input, output, forget, cell); "Key", "Value" and "Query" layers refer to projection layers in the attention module (see [61] for details); "Attention MLP" is the second layer of the 2-layer attention MLP; and finally "Word" and "Logits" refer to the word embedding matrix E w in Eq. 5 and E o in Eq. 3 respectively. 
From the figures, we can see that our proposed pruning method consistently prune "attention MLP" layer the least. This is followed by "LSTM kernel" and "Value" layers where they generally receive lesser pruning compared to others. On the flip side, "Key" and "Query" layers were pruned most heavily at levels often exceeding the targeted pruning rates. Finally, "Word embedding" consistently receives more pruning than "Logits layer". This may indicate that there exists substantial information redundancy in the word embeddings matrix as noted in works such as [37,40,62]. Conclusion and Future Work In this work, we have investigated the effectiveness of model weight pruning on the task of image captioning with visual attention. In particular, we proposed an end-to-end pruning method that performs considerably better than competing methods at maintaining captioning performance while maximising compression rate. Our single-shot approach is simple and fast to use, provides good performance, and its sparsity level is easy to tune. Moreover, we have demonstrated by pruning decoder weights during training, we can find sparse models that performs better than dense counterparts while significantly reducing model size. Our results pave the way towards deployment on mobile and embedded devices due to their small size and reduced memory requirements. In the future, we wish to investigate the generalisation capability of end-to-end pruning when applied on Transformer models [61]. We would also
4,912
1908.10797
2971306187
Recurrent Neural Networks (RNNs) have been deployed as the de facto model to tackle a wide variety of language generation problems and have achieved state-of-the-art (SOTA) performance. However, despite their impressive results, the large number of parameters in the RNN model makes deployment on mobile and embedded devices infeasible. Driven by this problem, many works have proposed a number of pruning methods to reduce the size of the RNN model. In this work, we propose an end-to-end pruning method for image captioning models equipped with visual attention. Our proposed method is able to achieve sparsity levels up to 97.5% without significant performance loss relative to the baseline (around 1% loss at 40x compression of the GRU model). Our method is also simple to use and tune, facilitating faster development times for neural network practitioners. We perform extensive experiments on the popular MS-COCO dataset in order to empirically validate the efficacy of our proposed method.
While there are other works on compressing RNNs, most of the proposed methods either come with structural constraints or are complementary to model pruning in principle. Examples include using low-rank matrix factorisations @cite_28 @cite_40, product quantisation on embeddings @cite_14, factorising word predictions into multiple time steps @cite_36 @cite_15 @cite_25, and grouping RNNs @cite_56. Lastly, another closely related work by @cite_18 also incorporated model pruning into image captioning. However, we note three key differences: 1) their work is focused on proposing a new LSTM cell structure named the H-LSTM; 2) their work utilises the grow-and-prune (GP) method @cite_47, which necessitates compute- and time-expensive iterative pruning; and 3) the compression figures stated are calculated based on the size of the LSTM cells instead of the entire decoder.
{ "abstract": [ "Long short-term memory (LSTM) has been widely used for sequential data modeling. Researchers have increased LSTM depth by stacking LSTM cells to improve performance. This incurs model redundancy, increases run-time delay, and makes the LSTMs more prone to overfitting. To address these problems, we propose a hidden-layer LSTM (H-LSTM) that adds hidden layers to LSTM's original one level non-linear control gates. H-LSTM increases accuracy while employing fewer external stacked layers, thus reducing the number of parameters and run-time latency significantly. We employ grow-and-prune (GP) training to iteratively adjust the hidden layers through gradient-based growth and magnitude-based pruning of connections. This learns both the weights and the compact architecture of H-LSTM control gates. We have GP-trained H-LSTMs for image captioning and speech recognition applications. For the NeuralTalk architecture on the MSCOCO dataset, our three models reduce the number of parameters by 38.7x [floating-point operations (FLOPs) by 45.5x], run-time latency by 4.5x, and improve the CIDEr score by 2.6. For the DeepSpeech2 architecture on the AN4 dataset, our two models reduce the number of parameters by 19.4x (FLOPs by 23.5x), run-time latency by 15.7 , and the word error rate from 12.9 to 8.7 . Thus, GP-trained H-LSTMs can be seen to be compact, fast, and accurate.", "", "", "Recurrent neural networks (RNNs), including long short-term memory (LSTM) RNNs, have produced state-of-the-art results on a variety of speech recognition tasks. However, these models are often too large in size for deployment on mobile devices with memory and latency constraints. In this work, we study mechanisms for learning compact RNNs and LSTMs via low-rank factorizations and parameter sharing schemes. Our goal is to investigate redundancies in recurrent architectures where compression can be admitted without losing performance. A hybrid strategy of using structured matrices in the bottom layers and shared low-rank factors on the top layers is found to be particularly effective, reducing the parameters of a standard LSTM by 75 , at a small cost of 0.3 increase in WER, on a 2,000-hr English Voice Search task.", "Recurrent neural networks (RNNs) have achieved state-of-the-art performances in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model will become very big (e.g., possibly beyond the memory capacity of a GPU device) and its training will become very inefficient. In this work, we propose a novel technique to tackle this challenge. The key idea is to use 2-Component (2C) shared embedding for word representations. We allocate every word in the vocabulary into a table, each row of which is associated with a vector, and each column associated with another vector. Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Since the words in the same row share the row vector and the words in the same column share the column vector, we only need @math vectors to represent a vocabulary of @math unique words, which are far less than the @math vectors required by existing approaches. Based on the 2-Component shared embedding, we design a new RNN algorithm and evaluate it using the language modeling task on several benchmark datasets. 
The results show that our algorithm significantly reduces the model size and speeds up the training process, without sacrifice of accuracy (it achieves similar, if not better, perplexity as compared to state-of-the-art language models). Remarkably, on the One-Billion-Word benchmark Dataset, our algorithm achieves comparable perplexity to previous language models, whilst reducing the model size by a factor of 40-100, and speeding up the training process by a factor of 2. We name our proposed algorithm to reflect its very small model size and very high training speed.", "", "This paper develops the FastRNN and FastGRNN algorithms to address the twin RNN limitations of inaccurate training and inefficient prediction. Previous approaches have improved accuracy at the expense of increased prediction costs making them infeasible for resource-constrained and real-time applications. Unitary RNNs have increased accuracy somewhat by restricting the range of the state transition matrix's singular values but have also increased the model size as they required a larger number of hidden units to make up for the loss in expressive power. Gated RNNs have obtained state-of-the-art accuracies by adding extra parameters thereby resulting in even larger models. FastRNN addresses these limitations by developing a leaky integrator unit inspired peephole connection that does not constrain the range of the singular values explicitly and has only two extra scalar parameters. FastGRNN then extends the peephole to a gated architecture by reusing the RNN matrices in the gate to match state-of-the-art accuracies but with a 2-4x smaller model as compared to other gated architectures and with almost no overheads over a standard RNN. Further compression could be achieved by allowing FastGRNN's matrices to be low-rank, sparse and quantized without a significant loss in accuracy. Experiments on multiple benchmark datasets revealed that FastGRNN could make more accurate predictions with up to a 35x smaller model as compared to leading unitary and gated RNN techniques. FastGRNN's code can be publicly downloaded from .", "Automatically describing the contents of an image is one of the fundamental problems in artificial intelligence. Recent research has primarily focussed on improving the quality of the generated descriptions. It is possible to construct multiple architectures that achieve equivalent performance for the same task. Among these, the smaller architecture is desirable as they require less communication across servers during distributed training and less bandwidth to export a new model from one place to another through a network. Generally, a deep learning architecture for image captioning consists of a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) clubbed together within an encoder-decoder framework. We propose to combine a significantly smaller CNN architecture termed SqueezeNet and a memory and computation efficient LightRNN within a visual attention framework. Experimental evaluation of the proposed architecture on Flickr8k, Flickr30k and MS-COCO datasets reveal superior result when compared to the state of the art.", "" ], "cite_N": [ "@cite_18", "@cite_14", "@cite_47", "@cite_28", "@cite_36", "@cite_56", "@cite_40", "@cite_15", "@cite_25" ], "mid": [ "2806862281", "2889084656", "", "2963991999", "2546915671", "2803417661", "2962840019", "2802985683", "2920116436" ] }
Image Captioning with Sparse Recurrent Neural Network
Automatically generating a caption that describes an image, a problem known as image captioning, is a challenging problem where computer vision (CV) meets natural language processing (NLP). A well-performing model not only has to identify the objects in the image, but also capture the semantic relationships between them, the general context, and the activities they are involved in. Lastly, the model has to map the visual representation into a fully-formed sentence in a natural language such as English. A good image captioning model can have many useful applications, which include helping the visually impaired better understand web content, providing descriptive annotations of website contents, and enabling better context-based image retrieval by tagging images with accurate natural language descriptions. Driven by user privacy concerns and the quest for lower user-perceived latency, deployment on edge devices away from remote servers is required. As edge devices usually have limited battery capacity and thermal limits, this presents a few key challenges in the form of storage size, power consumption and computational demands [1]. For models incorporating RNNs, on-device inference is often memory bandwidth-bound. As RNN parameters are fixed at every time step, parameter reading forms the bulk of the work [2,1]. As such, RNN pruning offers the opportunity to not only reduce the amount of memory access but also fit the model in on-chip SRAM cache rather than off-chip DRAM memory, both of which dramatically reduce power consumption [3,4]. Similarly, sparsity patterns for pruned RNNs are fixed across time steps. This offers the potential to factorise scheduling and load balancing operations outside of the loop and enable reuse [2]. Lastly, pruning allows larger RNNs to be stored in memory and trained [2,5]. In this work, we propose a one-shot end-to-end pruning method to produce very sparse image captioning decoders (up to 97.5% sparsity) while maintaining good performance relative to the dense baseline model as well as competing methods. We detail our contributions in the following section (Sec. 2.2).

Model pruning

Modern neural networks that provide good performance tend to be large and overparameterised, fuelled by observations that larger networks tend to be easier to train [6,7,8]. This in turn drives numerous efforts to reduce model size using techniques such as weight pruning and quantisation [9,10,11]. Early works like [12] and [13] explored pruning by computing the Hessian of the loss with respect to the parameters in order to assess the saliency of each parameter. Other works involving saliency computation include [14] and [15], where the sensitivity of the loss with respect to neurons and weights, respectively, is used. On the other hand, works such as [16,17] directly induce network sparsity by incorporating sparsity-enforcing penalty terms into the loss function. Most of the recent works in network pruning have focused on vision-centric classification tasks using Convolutional Neural Networks (CNNs) and occasionally RNNs. Techniques proposed include magnitude-based pruning [3,4,18] and variational pruning [19,20,21]. Among these, magnitude-based weight pruning has become popular due to its effectiveness and simplicity. Most notably, [3] employed a combination of pruning, quantisation and Huffman encoding, resulting in massive reductions in model size without affecting accuracy.
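To make the magnitude-based pruning idea referenced above concrete, here is a minimal NumPy sketch (our illustration, not code from any of the cited works): weights are ranked by absolute value and the smallest fraction is zeroed out.

```python
# Illustrative NumPy sketch (not from the paper): magnitude-based weight
# pruning in the spirit of [3, 4]. Weights whose absolute value falls below
# the threshold implied by the target sparsity are zeroed out.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude entries zeroed.

    `sparsity` is the fraction of entries to remove, e.g. 0.9 keeps only the
    largest 10% of weights by absolute value.
    """
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))          # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(512, 512))
    w_sparse = magnitude_prune(w, sparsity=0.9)
    print("NNZ fraction:", np.count_nonzero(w_sparse) / w_sparse.size)  # ~0.1
```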
While unstructured sparse connectivity provides a reduction in storage size, it requires sparse General Matrix-Matrix Multiply (GEMM) libraries such as cuSPARSE and SPBLAS in order to achieve accelerated inference. Motivated by existing hardware architectures optimised for dense linear algebra, many works propose techniques to prune and induce sparsity in a structured way in which entire filters are removed [22,23,24]. On the other hand, works extending connection pruning to RNN networks are considerably fewer [25,2,1,26]. See et al. [25] first explored magnitude-based pruning applied to a deep multi-layer neural machine translation (NMT) model with Long-Short Term Memory (LSTM) [27]. In their work, three pruning schemes are evaluated: class-blind, class-uniform and class-distribution. Class-blind pruning was found to produce the best result compared to the other two schemes. Narang et al. [2] introduced a gradual magnitude-based pruning scheme for speech recognition RNNs whereby all the weights in a layer whose magnitudes fall below some chosen threshold are pruned. Gradual pruning is performed in parallel with network training, while the pruning rate is controlled by a slope function with two distinct phases. This was extended by Zhu and Gupta [1], who simplified the gradual pruning scheme with fewer hyperparameters.

Our contribution

Our proposed end-to-end pruning method possesses three main qualities: i) Simple and fast. Our approach enables easy pruning of the RNN decoder equipped with visual attention, whereby the best number of weights to prune in each layer is automatically determined. Compared to works such as [1,2], our approach is simpler with 1 to 2 hyperparameters versus 3 to 4 hyperparameters. Our method also does not rely on reinforcement learning techniques such as in the work of [28]. Moreover, our method applies pruning to all the weights in the RNN decoder and does not require special considerations to exclude pruning from certain weight classes. Lastly, our method completes pruning in a single-shot process rather than requiring an iterative train-and-prune process as in [29,30,31,32]. ii) Good performance-to-sparsity ratio enabling very high sparsity. Our approach achieves good performance across sparsity levels from 80% up until 97.5% (a 40× reduction in the Number of Non-Zero (NNZ) parameters). This is in contrast with competing methods [1,25], where there is a significant performance drop-off starting at a sparsity level of 90%. iii) Easily tunable sparsity level. Our approach provides a way for neural network practitioners to easily control the level of sparsity and compression desired. This allows for model solutions that are tailored for each particular scenario. In contrast, while the closely related works of [33,34] also provide good performance with the incorporation of gating variables, there is not a straightforward way of controlling the final sparsity level. In their works, regularisers such as the bi-modal, l_2, l_1 and l_0 regularisers are used to encourage network sparsity. Their works also focus only on image classification using CNNs. While there are other works on compressing RNNs, most of the proposed methods either come with structural constraints or are complementary to model pruning in principle. Examples include using low-rank matrix factorisations [35,36], product quantisation on embeddings [37], factorising word predictions into multiple time steps [38,39,40], and grouping RNNs [41].
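As a side note on the parameter-count argument behind the low-rank factorisation approaches cited above, the following small sketch (our own, assuming a plain truncated-SVD factorisation rather than any cited implementation) shows how replacing a dense matrix W with two factors U and V reduces storage.

```python
# Illustrative sketch (not part of the paper's method): approximating a dense
# weight matrix with a low-rank factorisation W ~= U @ V. The saving comes
# purely from the parameter count: m*n versus r*(m + n) for rank r.
import numpy as np

def low_rank_approx(W: np.ndarray, rank: int):
    """Best rank-`rank` approximation of W via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]        # fold singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

if __name__ == "__main__":
    m, n, r = 2048, 512, 64
    W = np.random.default_rng(0).normal(size=(m, n))
    U_r, V_r = low_rank_approx(W, r)
    dense_params = m * n
    factored_params = r * (m + n)
    print(f"dense: {dense_params}, factored: {factored_params}, "
          f"ratio: {dense_params / factored_params:.1f}x")
    print("relative reconstruction error:",
          np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W))
```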
Lastly, another closely related work by [30] also incorporated model pruning into image captioning. However, we note three key differences: 1) their work is focused on proposing a new LSTM cell structure named the H-LSTM; 2) their work utilises the grow-and-prune (GP) method [31], which necessitates compute- and time-expensive iterative pruning; and 3) the compression figures stated are calculated based on the size of the LSTM cells instead of the entire decoder.

Proposed Method

Our proposed method involves incorporating learnable gating parameters into a regular image captioning framework. We denote the weight, bias and gating matrices as W, B and G respectively. For a model with L layers, the captioning and gating parameters are denoted as θ and φ such that θ = {W_{1:L}, B_{1:L}} and φ = {G_{1:L}}. As there is a substantial body of existing work focusing on pruning CNNs, we focus our efforts on pruning generative RNNs. As such, we only prune the RNN decoder. All model size calculations in this work include only the decoder (including the attention module), while the encoder (i.e. the CNN) is excluded.

Image captioning with visual attention

Our image captioning framework of interest is a simplified variant of the Show, Attend and Tell [42] model, which uses a single-layer RNN network equipped with visual attention on the CNN feature map. It is a popular framework that forms the basis for subsequent state-of-the-art (SOTA) works on image captioning [43,44]. In this work, we employ LSTM and Gated Recurrent Unit (GRU) [45] as the RNN cell. Suppose {S_0, ..., S_{T-1}} is a sequence of words in a sentence of length T; the model directly maximises the probability of the correct description given an image I using the following formulation:

\log p(S \mid I) = \sum_{t=0}^{T} \log p(S_t \mid I, S_{0:t-1}, c_t) \quad (1)

where t is the time step and p(S_t \mid I, S_{0:t-1}, c_t) is the probability of generating a word given an image I, the previous words S_{0:t-1}, and the context vector c_t. For an RNN network with r units, the hidden state of the RNN is initialised with the image embedding vector as follows:

h_{t=-1} = W_I \, I_{embed}, \quad m_{t=-1} = 0 \quad (2)

where W_I \in \mathbb{R}^{r \times h} is a weight matrix and h is the size of I_{embed}. The attention function used in this work is the soft attention introduced by [46] and used in [42], where a multilayer perceptron (MLP) with a single hidden layer is employed to calculate the attention weights on a particular feature map. The context vector c_t is then concatenated with the previous predicted word embedding to serve as input to the RNN. Finally, a probability distribution over the vocabulary is produced from the hidden state h_t:

p_t = \mathrm{Softmax}(E_o \, h_t) \quad (3)
h_t, m_t = \mathrm{RNN}(x_t, h_{t-1}, m_{t-1}) \quad (4)
x_t = [E_w \, S_{t-1}, \, c_t] \quad (5)
c_t = \mathrm{SoftAtt}(f) \quad (6)

where E_w \in \mathbb{R}^{q \times v} and E_o \in \mathbb{R}^{v \times r} are the input and output embedding matrices respectively; p_t is the probability distribution over the vocabulary V; m_t is the memory state; x_t is the current input; S_{t-1} \in \mathbb{R}^{q} is the one-hot vector of the previous word; c_t \in \mathbb{R}^{a} is the context vector; f is the CNN feature map; and [\cdot, \cdot] is the concatenation operator. For GRU, all m_t terms are ignored. Finally, the standard cross-entropy loss function for the captioning model θ is given by:

L_c = -\sum_{t}^{T} \log p_t(S_t) + \lambda_d \, \lVert \theta \rVert_2^2 \quad (7)
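Before moving on to the pruning mechanism, the decoding step described by Eqs. 3-6 can be summarised in a short forward-pass sketch. This is our own illustration: the simplified recurrent cell, the weight shapes and the helper names are assumptions rather than the authors' released code.

```python
# Illustrative NumPy sketch of a single decoding step (Eqs. 3-6): soft attention
# over the CNN feature map, input concatenation, one recurrent update, and the
# word distribution. The minimal tanh cell below stands in for the LSTM/GRU.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention(f, h_prev, params):
    """MLP soft attention (Eq. 6): weights over the 196 spatial locations."""
    e = np.tanh(f @ params["W_f"] + h_prev @ params["W_h"]) @ params["w_a"]
    alpha = softmax(e)                                   # attention weights
    return alpha @ f                                     # context vector c_t

def decode_step(word_prev_onehot, h_prev, f, params):
    c_t = soft_attention(f, h_prev, params)              # Eq. 6
    x_t = np.concatenate([params["E_w"] @ word_prev_onehot, c_t])   # Eq. 5
    h_t = np.tanh(params["W_x"] @ x_t + params["W_r"] @ h_prev)     # stand-in for Eq. 4
    p_t = softmax(params["E_o"] @ h_t)                   # Eq. 3
    return p_t, h_t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v, q, r, a, d = 1000, 256, 512, 512, 832             # vocab, word, RNN, attention, feature dims
    params = {
        "E_w": rng.normal(size=(q, v)) * 0.01, "E_o": rng.normal(size=(v, r)) * 0.01,
        "W_f": rng.normal(size=(d, a)) * 0.01, "W_h": rng.normal(size=(r, a)) * 0.01,
        "w_a": rng.normal(size=(a,)) * 0.01,
        "W_x": rng.normal(size=(r, q + d)) * 0.01, "W_r": rng.normal(size=(r, r)) * 0.01,
    }
    f = rng.normal(size=(196, d))                        # Mixed-4f style feature map
    word = np.zeros(v); word[0] = 1.0                    # previous word (one-hot)
    p_t, h_t = decode_step(word, np.zeros(r), f, params)
    print(p_t.shape, round(float(p_t.sum()), 3))         # (1000,) 1.0
```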
End-to-end pruning

Formulation. Similar to [1], the TensorFlow framework is extended to prune network connections during training. Inspired by the concept of learnable Supermasks introduced by [33,47], our proposed method achieves model pruning via learnable gating variables that are trained in an end-to-end fashion. An overview of our method is illustrated in Fig. 1. For every weight variable matrix W to be pruned, we create a gating variable matrix G with the same shape as W. This gating matrix G functions as a masking mechanism that determines which of the parameters w in the weight matrix W participate in both the forward execution and the back-propagation of the graph. To achieve this masking effect, we calculate the effective weight tensor \hat{W} as follows:

\hat{W}_l = W_l \odot G^{b}_l \quad (8)
G^{b}_l = z(\sigma(G_l)) \quad (9)

where W_l, G_l \in \mathbb{R}^{D} are the original weight and gating matrices from layer l with shape D, and the superscript (\cdot)^{b} indicates binary sampled variables. \odot denotes element-wise multiplication; \sigma(\cdot) is a point-wise function that transforms continuous values into the interval (0, 1); and z(\cdot) is a point-wise function that samples from a Bernoulli distribution. The composite function z(\sigma(\cdot)) thus effectively transforms continuous values into binary values. Binary gating matrices G^{b} can be obtained by treating \sigma(G) as Bernoulli random variables. While there are many possible choices for the \sigma function, we decided to use the logistic sigmoid function following [48] and [47]. To sample from the Bernoulli distribution, we can either perform an unbiased draw or a maximum-likelihood (ML) draw [33]. An unbiased draw is the usual sampling process where a gating value g \in (0, 1) is binarised to 1.0 with probability g and to 0.0 otherwise, whereas an ML draw involves thresholding the value g at 0.5. In this work, we denote the unbiased and ML draws using the sampling functions z(\cdot) = \mathrm{Bern}(\cdot) and z(\cdot) = \mathrm{Round}(\cdot) respectively. We back-propagate through both sampling functions using the straight-through estimator [48] (i.e. \partial z(g) / \partial g = 1). Prior to training, all the gating variables are initialised to the same constant value m, while the weights and biases of the network are initialised using standard initialisation schemes (e.g. Xavier [49]). During training, both sampling functions \mathrm{Bern}(\cdot) and \mathrm{Round}(\cdot) are used, in different ways. To obtain the effective weight tensor used to generate network activations, we utilise \mathrm{Bern}(\cdot) to inject some stochasticity that helps with training and to mitigate the bias arising from the constant-value initialisation. Thus the effective weight calculation becomes:

\hat{W}_l = W_l \odot \mathrm{Bern}(\sigma(G_l)) \quad (10)

To drive the sparsity level of the gating variables φ to the user-specified level s_{target}, we introduce a regularisation term L_s. Consistent with the observations in the works of [1] and [32], we found that annealing the loss over the course of training produces the best result. Annealing is done using a cosine curve α defined in Eq. 12. To ensure determinism when calculating sparsity, we use \mathrm{Round}(\cdot) to sample from \sigma(G):

L_s = (1 - \alpha) \times \left( s_{target} - \left( 1 - \frac{p_{nnz}}{p_{total}} \right) \right) \quad (11)
\alpha = \frac{1}{2} \left( 1 + \cos \frac{n \pi}{n_{max}} \right) \quad (12)
p_{nnz} = \sum_{l=0}^{L} \sum_{j=0}^{J} \mathrm{Round}(\sigma(g_{j,l})) \quad (13)

where p_{nnz} is the number of NNZ gating parameters; p_{total} is the total number of gating parameters; n and n_{max} are the current and final training steps respectively; g_{j,l} is the gating parameter at position j in the matrix G_l from layer l; L is the number of layers; and J is the number of parameters in matrix G_l.
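The following is a minimal forward-only NumPy sketch (ours, not the authors' TensorFlow implementation) of the gating mechanism in Eqs. 8-13: the two sampling modes, the effective weights, and the cosine-annealed sparsity loss. Gradient handling via the straight-through estimator is only noted in a comment, since it belongs to the training graph.

```python
# Illustrative NumPy sketch (forward pass only) of the gating mechanism in
# Eqs. 8-13: Bernoulli ("unbiased") and Round ("maximum-likelihood") sampling
# of the binary masks, the effective weights, and the annealed sparsity loss.
# In the actual training graph, the straight-through estimator is used so that
# gradients flow through the sampling step as if it were the identity.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bern_sample(probs, rng):
    """Unbiased draw: 1 with probability g, else 0 (z = Bern)."""
    return (rng.random(probs.shape) < probs).astype(np.float64)

def round_sample(probs):
    """Maximum-likelihood draw: threshold at 0.5 (z = Round)."""
    return (probs >= 0.5).astype(np.float64)

def effective_weights(W, G, rng, training=True):
    """Eq. 10 during training, Eq. 15 at export time."""
    probs = sigmoid(G)
    mask = bern_sample(probs, rng) if training else round_sample(probs)
    return W * mask

def sparsity_loss(gates, s_target, step, max_steps):
    """Eqs. 11-13: sparsity measured deterministically with Round, cosine-annealed."""
    alpha = 0.5 * (1.0 + np.cos(np.pi * step / max_steps))      # Eq. 12
    p_nnz = sum(round_sample(sigmoid(G)).sum() for G in gates)  # Eq. 13
    p_total = sum(G.size for G in gates)
    return (1.0 - alpha) * (s_target - (1.0 - p_nnz / p_total)) # Eq. 11

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(512, 512))
    G = np.full_like(W, 5.0)          # constant initialisation m = 5.0
    W_hat = effective_weights(W, G, rng, training=True)
    print("train-time NNZ fraction:", np.count_nonzero(W_hat) / W_hat.size)
    print("sparsity loss at step 0:", sparsity_loss([G], 0.95, step=0, max_steps=100))
    print("sparsity loss at end:   ", sparsity_loss([G], 0.95, step=100, max_steps=100))
```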
The progression of the sparsity loss L_s as well as the sparsity levels of various layers in the decoder are illustrated in Fig. 2b and 2a respectively. (Fig. 2 caption: a detailed explanation for (a) is given in Sec. 5.6; in (b), "Weighted annealed loss" refers to \lambda_s L_s in Eq. 14 while "Loss" refers to L_s before applying the cosine annealing in Eq. 11.) The final objective function used to train the captioning model θ with gating variables φ is:

L(I, S, s_{target}) = L_c + \lambda_s L_s \quad (14)

Intuitively, the captioning loss term L_c provides supervision for learning the saliency of each parameter, whereby important parameters are retained with higher probability while unimportant ones are dropped more frequently. On the other hand, the sparsity regularisation term L_s pushes down the average value of the Bernoulli gating parameters so that most of them have a value less than 0.5 after the sigmoid activation. The hyperparameter \lambda_s determines the weightage of L_s. If \lambda_s is too low, the target sparsity level might not be attained, whereas high values might slightly affect performance (see Sec. 5.1).

Training and Inference. The training process of the captioning model is divided into two distinct stages: decoder training and end-to-end fine-tuning. During the decoder training stage, we freeze the CNN parameters and only learn the decoder and gating parameters by optimising the loss given in Eq. 14. For the fine-tuning stage, we restore all the parameters θ and φ from the last checkpoint at the end of decoder training and optimise the entire model including the CNN. During this stage, \mathrm{Bern}(\cdot) is still used but all φ parameters are frozen. After training is completed, all the weight matrices W_{1:L} are transformed into sparse matrices by sampling from G_{1:L} using \mathrm{Round}(\cdot), after which G can be discarded. In other words, the final weights W^{f} are calculated as:

W^{f}_l = W_l \odot \mathrm{Round}(\sigma(G_l)) \quad (15)

Experiment Setup

Unless stated otherwise, all experiments have the following configurations. We did not perform extensive hyperparameter search due to limited resources.

Hyperparameters

Models are implemented using TensorFlow r1.9. The image encoder used in this work is GoogLeNet (Inception-V1) with batch normalisation [50,51] pre-trained on ImageNet [52]. The input images are resized to 256 × 256, then randomly flipped and cropped to 224 × 224 before being fed to the CNN. The attention function \mathrm{SoftAtt}(\cdot) operates on the Mixed-4f map f \in \mathbb{R}^{196 \times 832}. The size of the context vector c_t and the attention MLP is set to a = 512. A single-layer LSTM or GRU network with a hidden state size of r = 512 is used. The word embedding size is set to q = 256 dimensions. The optimiser used for decoder training is Adam [53], with a batch size of 32. The initial learning rate (LR) is set to 1 × 10^{-2}, and annealed using the cosine curve α defined in Eq. 12, ending at 1 × 10^{-5}. All models are trained for 30 epochs. The weight decay rate is set to \lambda_d = 1 × 10^{-5}. For fine-tuning, a smaller initial LR of 1 × 10^{-3} is used and the entire model is trained for 10 epochs. Captioning model parameters are initialised randomly using Xavier uniform initialisation [49]. The input and output dropout rates for the dense RNN are both set to 0.35, while the attention map dropout rate is set to 0.1. Following [4,2], a lower dropout rate is used for sparse networks, where the RNN and attention dropout rates are set to 0.11 and 0.03 respectively. This is done to account for the reduced capacity of the sparse models. For fair comparison, we apply pruning to all weights of the captioning model for all of the pruning schemes. For our proposed method, we train the gating variables φ with a higher constant LR of 100 without annealing, which is consistent with [47].
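As an aside, the cosine curve of Eq. 12 also drives the learning-rate schedule quoted above (1 × 10^{-2} down to 1 × 10^{-5}); a small sketch of that schedule follows, with the exact interpolation between the two endpoints and the step count being our own assumptions.

```python
# Illustrative sketch (our assumption about the exact interpolation, not code
# from the paper): annealing the decoder learning rate from 1e-2 to 1e-5 with
# the cosine curve of Eq. 12.
import math

def cosine_annealed_lr(step, max_steps, lr_start=1e-2, lr_end=1e-5):
    alpha = 0.5 * (1.0 + math.cos(math.pi * step / max_steps))  # goes 1 -> 0
    return lr_end + alpha * (lr_start - lr_end)

if __name__ == "__main__":
    max_steps = 30 * 3500    # hypothetical: ~3500 batches per epoch for 30 epochs
    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
        step = int(frac * max_steps)
        print(f"{frac:>4.0%} of training: lr = {cosine_annealed_lr(step, max_steps):.2e}")
```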
We found that an LR lower than 100 causes φ to train too slowly. We set \lambda_s according to this heuristic: \lambda_s = \max(5, \, 0.5 / (1 - s_{target})). All gating parameters φ are initialised to a constant m = 5.0; see Sec. 5.1 for other values. For gradual pruning [1], pruning is started after the first epoch is completed and ends at the end of epoch 15, following the general heuristics outlined in [2]. The pruning frequency is set to 1000. We use the standard scheme where each layer is pruned to the same pruning ratio at every step. For hard pruning [25], pruning is applied to the dense baseline model after training is completed. Retraining is then performed for 10 epochs. The LR and annealing schedule are the same as those used for the dense baseline. For inference, beam search is used in order to better approximate \hat{S} = \arg\max_S p(S \mid I). The beam size is set to b = 3 with no length normalisation. We evaluate the last checkpoint upon completion of training for all the experiments. We denote the compression ratio as CR.

Dataset

The experiments are performed on the popular MS-COCO dataset [54]. It is a public English captioning dataset which contains 123,287 images, and each image is given at least 5 captions by different Amazon Mechanical Turk (AMT) workers. As there is no official test split with annotations available, the publicly available split in the work of [55] is used in this work. The split assigns 5,000 images for validation, another 5,000 for testing and the rest for training. We reuse the publicly available tokenised captions. Words that occur fewer than 5 times are filtered out and sentences longer than 20 words are truncated. All the scores are obtained using the publicly available MS-COCO evaluation toolkit, which computes BLEU [56], METEOR [57], ROUGE-L [58], CIDEr [59] and SPICE [60]. For the sake of brevity, we label BLEU-1 to BLEU-4 as B-1 to B-4, and METEOR, ROUGE-L, CIDEr, SPICE as M, R, C, S respectively.

Experiments and Discussion

Ablation study

Table 1 shows the effect of various gating initialisation values. From the table, we can see that the best overall performance is achieved when m is set to 5. Starting the gating parameters at a value of 5 allows all the captioning parameters θ to be retained with high probability in the early stages of training, which leads to better convergence. This observation is also consistent with the works of [1] and [32], where the authors found that gradual pruning and late resetting can lead to better model performance. Thus, we recommend setting m = 5.0. Table 2 shows the effect of the sparsity regularisation weightage \lambda_s. This is an important hyperparameter as it affects the final sparsity level at convergence. From the results, we can see that low values lead to insufficient sparsity, and that a higher sparsity target s_{target} requires a higher \lambda_s. For image captioning on MS-COCO, we empirically determined that the heuristic given in Sec. 4.1 works sufficiently well for sparsity levels from 80% to 97.5% (see Tables 3 and 4).

Comparison with RNN pruning methods

In this section, we provide extensive comparisons of our proposed method with the dense baselines as well as competing methods at multiple sparsity levels. All the models have been verified to have achieved the targeted sparsity levels. From Tables 3 and 4, we can clearly see that our proposed end-to-end pruning provides good performance when compared to the dense baselines. This is true even at high pruning ratios of 90% and 95%.
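Before walking through the numbers, the bookkeeping used in these comparisons can be written down explicitly. The helpers below are hypothetical (ours); only the λ_s heuristic is taken verbatim from Sec. 4.1, and the parameter counts and scores in the example are made up for illustration.

```python
# Small hypothetical helpers (not from the paper's code) for the bookkeeping
# used in the comparisons below: the lambda_s heuristic from Sec. 4.1, the
# compression ratio (CR), and relative metric changes versus a dense baseline.
def lambda_s_heuristic(s_target: float) -> float:
    """lambda_s = max(5, 0.5 / (1 - s_target)), as stated in Sec. 4.1."""
    return max(5.0, 0.5 / (1.0 - s_target))

def compression_ratio(total_params: int, nnz_params: int) -> float:
    return total_params / nnz_params

def relative_change(sparse_score: float, dense_score: float) -> float:
    """Relative change in percent; negative when the sparse model scores lower."""
    return 100.0 * (sparse_score - dense_score) / dense_score

if __name__ == "__main__":
    print(lambda_s_heuristic(0.975))                  # 20.0
    print(compression_ratio(11_000_000, 275_000))     # 40.0 at 97.5% sparsity (illustrative counts)
    print(f"{relative_change(32.1, 33.1):+.1f}%")     # example BLEU-4-style numbers (made up)
```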
The relative drops in BLEU-4 and CIDEr scores are only −1.0% to −2.9% and −1.3% to −2.9% respectively, while having 10–20× fewer NNZ parameters. This is in contrast with competing methods, whose performance drops are double or even triple ours, especially for LSTM. The performance advantage provided by end-to-end pruning is even more apparent at the high pruning ratio of 97.5%, which offers a large 40× reduction in NNZ parameters. Even though we suffered relative degradations of −4.8% to −6.4% in BLEU-4 and CIDEr scores compared to the baselines, our performance is still significantly better than the next-closest method, which is gradual pruning. On the other hand, the performance achieved by our 80% pruned models is extremely close to that of the baselines. Our sparse LSTM model even very slightly outperforms the baseline on some metrics, although we note that the standard deviation of the CIDEr score across training runs is around 0.3 to 0.9. Among the competing methods, we can see that gradual pruning usually outperforms hard pruning, especially at high sparsities of 95% and 97.5%. (Table 4 caption: Comparison with the dense GRU baseline and competing methods. Bold text indicates best overall performance. "Gradual" and "Hard" denote the methods proposed in [1] and [25].) That being said, we can see that class-blind hard pruning is able to produce good results at moderate pruning rates of 80% and 90%, even outperforming gradual pruning. This is especially true for the GRU captioning model, where it briefly outperforms all other methods at 90% sparsity; however, we note that its performance on LSTM is generally lower. In contrast, our proposed approach achieves good performance on both LSTM and GRU models. All in all, these results showcase the strength of our proposed method. Across pruning ratios from 80% to 97.5%, our approach consistently maintains relatively good performance when compared to the dense baselines, while outperforming the magnitude-based gradual and hard pruning methods in most cases.

Effect of fine-tuning

In this section, we investigate the potential impact of fine-tuning the entire captioning model in an end-to-end manner. From Table 5, we can see that model fine-tuning has a performance-recovering effect on the sparse models. This phenomenon is especially apparent on very sparse models with sparsity at 97.5%. On both LSTM and GRU models, the drops in performance suffered due to pruning have mostly been reduced, except for the LSTM at 80% sparsity. Notably, all the pruned models have remarkably similar performance from 80% sparsity up until 97.5%. The score gap between the dense and sparse GRU models is exceedingly small, ranging from +1.2% to −1.9% for both BLEU-4 and CIDEr. For the LSTM models, even though the score gap is slightly larger at −0.9% to −2.5% on both BLEU-4 and CIDEr, it is still considerably smaller than without CNN fine-tuning (Table 3). These results suggest that the Inception-V1 CNN pre-trained on ImageNet is not optimised to provide useful features for sparse decoders. As such, end-to-end fine-tuning together with a sparse decoder allows the features extracted by the CNN to be adapted, so that useful semantic information can be propagated through the surviving connections in the decoder. We also provide a compression and performance comparison with the closely related work of [30], who utilised the GP [31] method to produce a sparse H-LSTM for image captioning. For fairness, we also provide scores obtained at a CR of 40× using a beam size of 2 instead of 3.
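Since the beam size enters these comparisons, a minimal framework-agnostic sketch of beam search with no length normalisation (as described in Sec. 4.1) is given below; the toy step function and vocabulary are our own stand-ins for the captioning decoder.

```python
# Minimal beam search sketch (an illustration, not the paper's implementation).
# `step_fn` stands in for one decoder step that returns log-probabilities over
# the vocabulary given the running prefix. No length normalisation is applied.
import math
from typing import Callable, List, Tuple

def beam_search(step_fn: Callable[[List[int]], List[float]],
                bos: int, eos: int, beam_size: int = 3,
                max_len: int = 20) -> List[int]:
    beams: List[Tuple[List[int], float]] = [([bos], 0.0)]   # (prefix, log-prob)
    finished: List[Tuple[List[int], float]] = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            log_probs = step_fn(prefix)
            for token, lp in enumerate(log_probs):
                candidates.append((prefix + [token], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates:
            (finished if prefix[-1] == eos else beams).append((prefix, score))
            if len(beams) == beam_size:
                break
        if not beams:
            break
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0]

if __name__ == "__main__":
    # Toy 4-word vocabulary: 0=<bos>, 1=<eos>, 2="a", 3="cat".
    def toy_step(prefix: List[int]) -> List[float]:
        table = {0: [0.01, 0.04, 0.6, 0.35], 2: [0.01, 0.09, 0.2, 0.7],
                 3: [0.01, 0.79, 0.1, 0.1]}
        probs = table.get(prefix[-1], [0.01, 0.89, 0.05, 0.05])
        return [math.log(p) for p in probs]

    print(beam_search(toy_step, bos=0, eos=1, beam_size=3))   # [0, 2, 3, 1]
```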
From the table, we can see that at overall CRs of 20× to 40×, our sparse models are able to outperform the H-LSTM with fewer NNZ parameters. This indicates that the effectiveness of our one-shot approach is at least comparable to the iterative process of grow-and-prune.

Large-sparse versus small-dense

In this section, we show that a large sparse LSTM image captioning model produced via end-to-end pruning is able to outperform a smaller dense LSTM trained normally. The small-dense model, denoted LSTM-S, has a word embedding size of q = 64 dimensions, an LSTM size of r = 128 units and an attention MLP size of a = 96 units. The results are given in Table 6. From the results, we can see that the small-dense model with 5× fewer parameters performs considerably worse than all the large-sparse LSTM-M models across the board. Notably, we can see that the large-sparse LSTM-M model with 40× fewer NNZ parameters still manages to outperform LSTM-S by a considerable margin. At equal NNZ parameters, the large-sparse model comfortably outperforms the small-dense model. This further showcases the strength of model pruning and solidifies the observations made in works on RNN pruning [2, 1].

Caption uniqueness and length

In this section, we explore the potential effects of our proposed end-to-end pruning on the uniqueness and length of the generated captions. As pruning reduces the complexity and capacity of the decoder considerably, we wish to see if the sparse models show any signs of training data memorisation and hence potentially overfitting. In such cases, the uniqueness of the generated captions would decrease as the decoder learns to simply repeat captions available in the training set. A generated caption is considered to be unique if it is not found in the training set. From Table 7, we can see that despite the heavy reductions in NNZ parameters, the uniqueness of the generated captions has not decreased. On the contrary, more unseen captions are being generated at higher levels of sparsity and compression. On the other hand, we can see that the average lengths of the generated captions peak at 80% sparsity in most cases and then decrease slightly as sparsity increases. That being said, the reductions in caption length are minimal (+0.5% to −2.3%) considering the substantial decoder compression rates of up to 40×. Together with the good performance shown in Tables 3 and 4, these results indicate that our approach is able to maintain both the variability of the generated captions and their quality as measured by the metric scores.

Layer-wise sparsity comparison

Finally, we visualise the pruning ratio of each decoder layer when pruned using the different methods listed in Sec. 5.2. Among the approaches, both gradual and class-uniform pruning produce the same sparsity level across all the layers. To better showcase the differences in layer-wise pruning ratios, we decided to visualise two opposite ends: the first has a relatively moderate sparsity of 80% while the other has a high sparsity of 97.5%. In both Fig. 3a and 3b, we denote the decoder layers as follows: "RNN initial state" refers to W_I in Eq. 2; "LSTM kernel" is the concatenation of all gate kernels in the LSTM (i.e. input, output, forget, cell); the "Key", "Value" and "Query" layers refer to the projection layers in the attention module (see [61] for details); "Attention MLP" is the second layer of the 2-layer attention MLP; and finally "Word" and "Logits" refer to the word embedding matrix E_w in Eq. 5 and E_o in Eq. 3 respectively.
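A hypothetical helper of the kind used to produce the per-layer summaries in Fig. 3 is sketched below (ours, not the paper's code); it simply reports the fraction of pruned weights per layer given the binary masks.

```python
# Hypothetical helper (not the paper's code) for the kind of per-layer summary
# visualised in Fig. 3: given the binary masks kept for each decoder layer,
# report the fraction of weights pruned in every layer and overall.
import numpy as np

def layer_sparsities(masks: dict) -> dict:
    """masks maps layer name -> binary mask array (1 = kept, 0 = pruned)."""
    per_layer = {name: 1.0 - m.mean() for name, m in masks.items()}
    total = sum(m.size for m in masks.values())
    kept = sum(m.sum() for m in masks.values())
    per_layer["overall"] = 1.0 - kept / total
    return per_layer

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy masks with made-up shapes, loosely mirroring the layer names in the text.
    masks = {
        "RNN initial state": (rng.random((1024, 512)) > 0.90).astype(float),
        "LSTM kernel":       (rng.random((1600, 2048)) > 0.70).astype(float),
        "Attention MLP":     (rng.random((512, 196)) > 0.60).astype(float),
        "Word embedding":    (rng.random((256, 10000)) > 0.95).astype(float),
    }
    for name, s in layer_sparsities(masks).items():
        print(f"{name:>18s}: {s:.1%} pruned")
```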
From the figures, we can see that our proposed pruning method consistently prunes the "Attention MLP" layer the least. This is followed by the "LSTM kernel" and "Value" layers, which generally receive less pruning than the others. On the flip side, the "Key" and "Query" layers are pruned most heavily, at levels often exceeding the targeted pruning rates. Finally, the "Word" embedding layer consistently receives more pruning than the "Logits" layer. This may indicate that there exists substantial information redundancy in the word embedding matrix, as noted in works such as [37,40,62].

Conclusion and Future Work

In this work, we have investigated the effectiveness of model weight pruning on the task of image captioning with visual attention. In particular, we proposed an end-to-end pruning method that performs considerably better than competing methods at maintaining captioning performance while maximising the compression rate. Our single-shot approach is simple and fast to use, provides good performance, and its sparsity level is easy to tune. Moreover, we have demonstrated that, by pruning decoder weights during training, we can find sparse models that perform better than their dense counterparts while significantly reducing model size. Our results pave the way towards deployment on mobile and embedded devices due to the sparse models' small size and reduced memory requirements. In the future, we wish to investigate the generalisation capability of end-to-end pruning when applied to Transformer models [61]. We would also
4,912
1908.10084
2971193649
BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering. In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy of BERT. We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where they outperform other state-of-the-art sentence embedding methods.
BERT @cite_19 is a pre-trained transformer network @cite_11, which set new state-of-the-art results for various NLP tasks, including question answering, sentence classification, and sentence-pair regression. The input for BERT for sentence-pair regression consists of the two sentences, separated by a special [SEP] token. Multi-head attention over 12 layers (base model) or 24 layers (large model) is applied and the output is passed to a simple regression function to derive the final label. Using this setup, BERT set a new state-of-the-art performance on the Semantic Textual Similarity (STS) benchmark @cite_18. RoBERTa @cite_29 showed that the performance of BERT can be further improved by small adaptations to the pre-training process. We also tested XLNet @cite_22, but it generally led to worse results than BERT.
{ "abstract": [ "Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).", "With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.", "Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (, 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.", "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature." ], "cite_N": [ "@cite_18", "@cite_22", "@cite_29", "@cite_19", "@cite_11" ], "mid": [ "2739351760", "2950813464", "2965373594", "2896457183", "2963403868" ] }
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
In this publication, we present Sentence-BERT (SBERT), a modification of the BERT network using siamese and triplet networks that is able to derive semantically meaningful sentence embeddings. This enables BERT to be used for new tasks that, up to now, were not applicable to BERT: large-scale semantic similarity comparison, clustering, and information retrieval via semantic search. BERT set new state-of-the-art performance on various sentence classification and sentence-pair regression tasks. BERT uses a cross-encoder: two sentences are passed to the transformer network and the target value is predicted. However, this setup is unsuitable for various pair regression tasks due to too many possible combinations. Finding the pair with the highest similarity in a collection of n = 10,000 sentences requires n·(n−1)/2 = 49,995,000 inference computations with BERT. On a modern V100 GPU, this requires about 65 hours. Similarly, finding which of the over 40 million existing questions on Quora is the most similar to a new question could be modeled as a pair-wise comparison with BERT; however, answering a single query would require over 50 hours. A common method to address clustering and semantic search is to map each sentence to a vector space such that semantically similar sentences are close. Researchers have started to input individual sentences into BERT and to derive fixed-size sentence embeddings. The most commonly used approaches are to average the BERT output layer (known as BERT embeddings) or to use the output of the first token (the [CLS] token). As we will show, this common practice yields rather bad sentence embeddings, often worse than averaging GloVe embeddings (Pennington et al., 2014). To alleviate this issue, we developed SBERT. The siamese network architecture enables fixed-size vectors to be derived for input sentences. Using a similarity measure like cosine similarity or Manhattan / Euclidean distance, semantically similar sentences can be found. These similarity measures can be computed extremely efficiently on modern hardware, allowing SBERT to be used for semantic similarity search as well as for clustering. The complexity of finding the most similar sentence pair in a collection of 10,000 sentences is reduced from 65 hours with BERT to the computation of 10,000 sentence embeddings (~5 seconds with SBERT) plus computing cosine similarity (~0.01 seconds). By using optimized index structures, finding the most similar Quora question can be reduced from 50 hours to a few milliseconds (Johnson et al., 2017). We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent (Conneau et al., 2017) and Universal Sentence Encoder. On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder. On SentEval (Conneau and Kiela, 2018), an evaluation toolkit for sentence embeddings, we achieve an improvement of 2.1 and 2.6 points, respectively. SBERT can be adapted to a specific task. It sets new state-of-the-art performance on a challenging argument similarity dataset (Misra et al., 2016) and on a triplet dataset to distinguish sentences from different sections of a Wikipedia article (Dor et al., 2018).
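To make the complexity argument above concrete, here is a minimal sketch of the bi-encoder search pattern, assuming the sentence-transformers package and its public 'bert-base-nli-mean-tokens' checkpoint; the sentences are illustrative only:

```python
# Minimal sketch: bi-encoder semantic search with precomputed embeddings.
# Assumes the `sentence-transformers` package and the public
# 'bert-base-nli-mean-tokens' checkpoint; sentences are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

sentences = [
    "A man is playing a guitar.",
    "Someone performs music on a guitar.",
    "The weather is cold today.",
]

model = SentenceTransformer("bert-base-nli-mean-tokens")
emb = model.encode(sentences)                           # one forward pass per sentence
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize for cosine similarity

# All n*(n-1)/2 pair similarities become a single matrix product,
# instead of one cross-encoder forward pass per pair.
sims = emb @ emb.T
i, j = np.triu_indices(len(sentences), k=1)
best = int(np.argmax(sims[i, j]))
print(sentences[i[best]], "<->", sentences[j[best]], float(sims[i, j][best]))
```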
The paper is structured in the following way: Section 3 presents SBERT, section 4 evaluates SBERT on common STS tasks and on the challenging Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Section 5 evaluates SBERT on SentEval. In section 6, we perform an ablation study to test some design aspects of SBERT. In section 7, we compare the computational efficiency of SBERT sentence embeddings in contrast to other state-of-the-art sentence embedding methods. Model SBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed-sized sentence embedding. We experiment with three pooling strategies: using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN. In order to fine-tune BERT / RoBERTa, we create siamese and triplet networks (Schroff et al., 2015) to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine similarity. The network structure depends on the available training data. We experiment with the following structures and objective functions. Classification Objective Function. We concatenate the sentence embeddings u and v with the element-wise difference |u − v| and multiply the result with the trainable weight W_t ∈ R^{3n×k}: o = softmax(W_t (u, v, |u − v|)), where n is the dimension of the sentence embeddings and k the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure 1. Regression Objective Function. The cosine similarity between the two sentence embeddings u and v is computed (Figure 2). We use mean-squared-error loss as the objective function. Triplet Objective Function. Given an anchor sentence a, a positive sentence p, and a negative sentence n, triplet loss tunes the network such that the distance between a and p is smaller than the distance between a and n. Mathematically, we minimize the following loss function: max(||s_a − s_p|| − ||s_a − s_n|| + ε, 0), with s_x the sentence embedding for a/p/n, ||·|| a distance metric, and ε a margin. The margin ε ensures that s_p is at least ε closer to s_a than s_n. As metric we use Euclidean distance and we set ε = 1 in our experiments. Training Details We train SBERT on the combination of the SNLI (Bowman et al., 2015) and the Multi-Genre NLI (Williams et al., 2018) datasets. SNLI is a collection of 570,000 sentence pairs annotated with the labels contradiction, entailment, and neutral. MultiNLI contains 430,000 sentence pairs and covers a range of genres of spoken and written text. We fine-tune SBERT with a 3-way softmax-classifier objective function for one epoch. We used a batch-size of 16, the Adam optimizer with learning rate 2e−5, and a linear learning rate warm-up over 10% of the training data. Our default pooling strategy is MEAN. Evaluation - Semantic Textual Similarity We evaluate the performance of SBERT for common Semantic Textual Similarity (STS) tasks. State-of-the-art methods often learn a (complex) regression function that maps sentence embeddings to a similarity score. However, these regression functions work pair-wise and, due to the combinatorial explosion, are often not scalable if the collection of sentences reaches a certain size. Instead, we always use cosine similarity to compare the similarity between two sentence embeddings.
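As a concrete illustration of the three objective functions defined in the Model section above, the following PyTorch sketch (our own, not the reference implementation) shows the classification head over (u, v, |u − v|), the cosine-similarity regression loss, and the triplet loss with Euclidean distance and margin ε = 1; the encoder producing the embeddings is abstracted away, and n = 768 is just the BERT-base default.

```python
# Illustrative PyTorch sketch of the SBERT training objectives described above;
# u, v, s_a, s_p, s_n stand in for mean-pooled BERT embeddings of shape (batch, n).
import torch
import torch.nn as nn
import torch.nn.functional as F

n, k = 768, 3                                   # embedding dim, NLI labels

# Classification objective: cross-entropy over W_t · (u, v, |u - v|)
W_t = nn.Linear(3 * n, k)

def classification_loss(u, v, labels):
    features = torch.cat([u, v, torch.abs(u - v)], dim=-1)   # (batch, 3n)
    logits = W_t(features)                                   # softmax applied inside CE
    return F.cross_entropy(logits, labels)

# Regression objective: MSE between cosine similarity and gold score
def regression_loss(u, v, gold):
    return F.mse_loss(F.cosine_similarity(u, v), gold)

# Triplet objective: max(||s_a - s_p|| - ||s_a - s_n|| + eps, 0) with eps = 1
def triplet_loss(s_a, s_p, s_n, eps=1.0):
    d_pos = F.pairwise_distance(s_a, s_p)       # Euclidean distance
    d_neg = F.pairwise_distance(s_a, s_n)
    return torch.clamp(d_pos - d_neg + eps, min=0.0).mean()

# Shape check with random tensors.
u, v = torch.randn(4, n), torch.randn(4, n)
print(classification_loss(u, v, torch.randint(0, k, (4,))).item())
print(triplet_loss(torch.randn(4, n), torch.randn(4, n), torch.randn(4, n)).item())
```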
We also ran our experiments with negative Manhattan and negative Euclidean distances as similarity measures, but the results for all approaches remained roughly the same. Unsupervised STS We evaluate the performance of SBERT for STS without using any STS-specific training data. We use the STS tasks 2012 - 2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), the STS benchmark (Cer et al., 2017), and the SICK-Relatedness dataset (Marelli et al., 2014). These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in (Reimers et al., 2016) that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent; their similarity is also computed by cosine similarity. The results are depicted in Table 1. The results show that directly using the output of BERT leads to rather poor performance. Averaging the BERT embeddings achieves an average correlation of only 54.81, and using the CLS-token output achieves an average correlation of only 29.19. Both are worse than computing average GloVe embeddings. Using the described siamese network structure and fine-tuning mechanism substantially improves the correlation, outperforming both InferSent and Universal Sentence Encoder substantially. The only dataset where SBERT performs worse than Universal Sentence Encoder is SICK-R. Universal Sentence Encoder was trained on various datasets, including news, question-answer pages and discussion forums, which appears to be more suitable to the data of SICK-R. In contrast, SBERT was pre-trained only on Wikipedia (via BERT) and on NLI data. While RoBERTa was able to improve the performance for several supervised tasks, we only observe minor differences between SBERT and SRoBERTa for generating sentence embeddings. Supervised STS The STS benchmark (STSb) (Cer et al., 2017) is a popular dataset to evaluate supervised STS systems. The data includes 8,628 sentence pairs from the three categories captions, news, and forums. It is divided into train (5,749), dev (1,500) and test (1,379). BERT set a new state-of-the-art performance on this dataset by passing both sentences to the network and using a simple regression method for the output. The BERT systems were trained with 10 random seeds and 4 epochs. SBERT was fine-tuned on the STSb dataset; SBERT-NLI was first pretrained on the NLI datasets and then fine-tuned on the STSb dataset. We use the training set to fine-tune SBERT using the regression objective function. At prediction time, we compute the cosine similarity between the sentence embeddings. All systems are trained with 10 random seeds to counter variances (Reimers and Gurevych, 2018). The results are depicted in Table 2. We experimented with two setups: only training on STSb, and first training on NLI, then training on STSb. We observe that the latter strategy leads to a slight improvement of 1-2 points. This two-step approach had an especially large impact for the BERT cross-encoder, which improved the performance by 3-4 points. We do not observe a significant difference between BERT and RoBERTa. Argument Facet Similarity We evaluate SBERT on the Argument Facet Similarity (AFS) corpus by Misra et al. (2016).
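Before turning to the AFS details, the following sketch illustrates the Spearman-based evaluation protocol used in the STS experiments above; it is our own illustration, assuming SciPy and precomputed embedding arrays, with array names chosen for the example.

```python
# Illustrative evaluation sketch: Spearman rank correlation between
# cosine similarities of sentence embeddings and gold STS labels (0-5).
# `emb_a`, `emb_b` are assumed (n_pairs, dim) numpy arrays; `gold` is (n_pairs,).
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb_a: np.ndarray, emb_b: np.ndarray, gold: np.ndarray) -> float:
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cos = np.sum(a * b, axis=1)               # cosine similarity per pair
    rho, _ = spearmanr(cos, gold)             # rank correlation, not Pearson
    return rho

# Example call with random data, just to show the signature.
rng = np.random.default_rng(0)
print(sts_spearman(rng.normal(size=(8, 4)), rng.normal(size=(8, 4)), rng.uniform(0, 5, 8)))
```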
The AFS corpus contains 6,000 sentential argument pairs from social media dialogs on three controversial topics: gun control, gay marriage, and death penalty. The data was annotated on a scale from 0 ("different topic") to 5 ("completely equivalent"). The similarity notion in the AFS corpus is fairly different from the similarity notion in the STS datasets from SemEval. STS data is usually descriptive, while AFS data are argumentative excerpts from dialogs. To be considered similar, arguments must not only make similar claims, but also provide a similar reasoning. Further, the lexical gap between the sentences in AFS is much larger. Hence, simple unsupervised methods as well as state-of-the-art STS systems perform badly on this dataset (Reimers et al., 2019). We evaluate SBERT on this dataset in two scenarios: 1) As proposed by Misra et al., we evaluate SBERT using 10-fold cross-validation. A drawback of this evaluation setup is that it is not clear how well approaches generalize to different topics. Hence, 2) we evaluate SBERT in a cross-topic setup. Two topics serve for training and the approach is evaluated on the left-out topic. We repeat this for all three topics and average the results. SBERT is fine-tuned using the regression objective function. The similarity score is computed using cosine similarity based on the sentence embeddings. We also provide the Pearson correlation r to make the results comparable to Misra et al. However, we showed (Reimers et al., 2016) that Pearson correlation has some serious drawbacks and should be avoided for comparing STS systems. The results are depicted in Table 3. Unsupervised methods like tf-idf, average GloVe embeddings or InferSent perform rather badly on this dataset with low scores. Training SBERT in the 10-fold cross-validation setup gives a performance that is nearly on par with BERT. However, in the cross-topic evaluation, we observe a performance drop of SBERT of about 7 points in Spearman correlation. To be considered similar, arguments should address the same claims and provide the same reasoning. BERT is able to use attention to compare both sentences directly (e.g., word-by-word comparison), while SBERT must map individual sentences from an unseen topic to a vector space such that arguments with similar claims and reasons are close. This is a much more challenging task, which appears to require more than just two topics for training to work on par with BERT. Wikipedia Sections Distinction Dor et al. (2018) Evaluation - SentEval SentEval (Conneau and Kiela, 2018) is a popular toolkit to evaluate the quality of sentence embeddings. Sentence embeddings are used as features for a logistic regression classifier. The logistic regression classifier is trained on various tasks in a 10-fold cross-validation setup and the prediction accuracy is computed for the test-fold. The purpose of SBERT sentence embeddings is not to be used for transfer learning for other tasks. Here, we think fine-tuning BERT as described by Devlin et al. (2018) for new tasks is the more suitable method, as it updates all layers of the BERT network. However, SentEval can still give an impression of the quality of our sentence embeddings for various tasks. We compare the SBERT sentence embeddings to other sentence embedding methods on the following seven SentEval transfer tasks: • MR: Sentiment prediction for movie review snippets on a five-star scale (Pang and Lee, 2005). • CR: Sentiment prediction of customer product reviews (Hu and Liu, 2004).
• SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries (Pang and Lee, 2004). • MPQA: Phrase-level opinion polarity classification from newswire (Wiebe et al., 2005). • SST: Stanford Sentiment Treebank with binary labels (Socher et al., 2013). • TREC: Fine-grained question-type classification from TREC (Li and Roth, 2002). • MRPC: Microsoft Research Paraphrase Corpus from parallel news sources (Dolan et al., 2004). The results can be found in Table 5. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embedding methods on this task. Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation. It appears that the sentence embeddings from SBERT capture sentiment information well: we observe large improvements for all sentiment tasks (MR, CR, and SST) from SentEval in comparison to InferSent and Universal Sentence Encoder. The only dataset where SBERT is significantly worse than Universal Sentence Encoder is the TREC dataset. Universal Sentence Encoder was pre-trained on question-answering data, which appears to be beneficial for the question-type classification task of the TREC dataset. Average BERT embeddings or the CLS-token output from a BERT network achieved bad results for various STS tasks (Table 1), worse than average GloVe embeddings. However, for SentEval, average BERT embeddings and the BERT CLS-token output achieve decent results (Table 5), outperforming average GloVe embeddings. The reason for this lies in the different setups. For the STS tasks, we used cosine similarity to estimate the similarities between sentence embeddings. Cosine similarity treats all dimensions equally. In contrast, SentEval fits a logistic regression classifier to the sentence embeddings. This allows certain dimensions to have a higher or lower impact on the classification result. We conclude that average BERT embeddings / the CLS-token output from BERT return sentence embeddings that are not suitable for use with cosine similarity or with Manhattan / Euclidean distance. For transfer learning, they yield slightly worse results than InferSent or Universal Sentence Encoder. However, using the described fine-tuning setup with a siamese network structure on NLI datasets yields sentence embeddings that achieve a new state-of-the-art for the SentEval toolkit. Ablation Study We have demonstrated strong empirical results for the quality of SBERT sentence embeddings. In this section, we perform an ablation study of different aspects of SBERT in order to get a better understanding of their relative importance. We evaluated different pooling strategies (MEAN, MAX, and CLS). For the classification objective function, we evaluate different concatenation methods. For each possible configuration, we train SBERT with 10 different random seeds and average the performances. The objective function (classification vs. regression) depends on the annotated dataset. For the classification objective function, we train SBERT-base on the SNLI and the Multi-NLI dataset.
For the regression objective function, we train on the training set of the STS benchmark dataset. Performances are measured on the development split of the STS benchmark dataset. Results are shown in Table 6. When trained with the classification objective function on NLI data, the pooling strategy has a rather minor impact. The impact of the concatenation mode is much larger. InferSent (Conneau et al., 2017) and Universal Sentence Encoder (Cer et al., 2018) both use (u, v, |u − v|, u * v) as input for a softmax classifier. However, in our architecture, adding the element-wise u * v decreased the performance. Table 6 (excerpt), STSb dev performance by concatenation mode: (u * v) 70.54; (|u − v|, u * v) 78.37; (u, v, u * v) 77.44; (u, v, |u − v|) 80.78; (u, v, |u − v|, u * v) 80.44. The most important component is the element-wise difference |u − v|. Note that the concatenation mode is only relevant for training the softmax classifier. At inference, when predicting similarities for the STS benchmark dataset, only the sentence embeddings u and v are used in combination with cosine similarity. The element-wise difference measures the distance between the dimensions of the two sentence embeddings, ensuring that similar pairs are closer and dissimilar pairs are further apart. When trained with the regression objective function, we observe that the pooling strategy has a large impact. There, the MAX strategy performs significantly worse than the MEAN or CLS-token strategy. This is in contrast to (Conneau et al., 2017), who found it beneficial for the BiLSTM layer of InferSent to use MAX instead of MEAN pooling. Computational Efficiency Sentence embeddings potentially need to be computed for millions of sentences; hence, a high computation speed is desired. In this section, we compare SBERT to average GloVe embeddings, InferSent (Conneau et al., 2017), and Universal Sentence Encoder. For our comparison we use the sentences from the STS benchmark (Cer et al., 2017). We compute average GloVe embeddings using a simple for-loop with Python dictionary lookups and NumPy. InferSent (https://github.com/facebookresearch/InferSent) is based on PyTorch. For Universal Sentence Encoder, we use the TensorFlow Hub version (https://tfhub.dev/google/universal-sentence-encoder-large/3), which is based on TensorFlow. SBERT is based on PyTorch. For improved computation of sentence embeddings, we implemented a smart batching strategy: sentences with similar lengths are grouped together and only padded to the longest element in a mini-batch. This drastically reduces computational overhead from padding tokens. Performances were measured on a server with an Intel i7-5820K CPU @ 3.30GHz, an Nvidia Tesla V100 GPU, CUDA 9.2 and cuDNN. The results are depicted in Table 7. On CPU, InferSent is about 65% faster than SBERT. This is due to the much simpler network architecture. InferSent uses a single Bi-LSTM layer, while BERT uses 12 stacked transformer layers. However, an advantage of transformer networks is the computational efficiency on GPUs. There, SBERT with smart batching is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. Smart batching achieves a speed-up of 89% on CPU and 48% on GPU. Computing average GloVe embeddings is, by a large margin, the fastest method to compute sentence embeddings. Conclusion We showed that BERT out-of-the-box maps sentences to a vector space that is rather unsuitable to be used with common similarity measures like cosine similarity.
The performance for seven STS tasks was below the performance of average GloVe embeddings. To overcome this shortcoming, we presented Sentence-BERT (SBERT). SBERT fine-tunes BERT in a siamese / triplet network architecture. We evaluated its quality on various common benchmarks, where it achieves a significant improvement over state-of-the-art sentence embedding methods. Replacing BERT with RoBERTa did not yield a significant improvement in our experiments. SBERT is computationally efficient. On a GPU, it is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. SBERT can be used for tasks which are computationally not feasible to model with BERT. For example, clustering 10,000 sentences with hierarchical clustering requires about 65 hours with BERT, as around 50 million sentence combinations must be computed. With SBERT, we were able to reduce this effort to about 5 seconds.
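As an illustration of the clustering use case mentioned in the conclusion, the following sketch clusters precomputed sentence embeddings with SciPy's hierarchical clustering; the checkpoint name and the distance threshold are illustrative assumptions, not values prescribed by the paper.

```python
# Illustrative sketch: hierarchical clustering on precomputed sentence embeddings.
# Assumes the `sentence-transformers` package; the checkpoint name and the
# distance threshold t are arbitrary choices for this example.
from scipy.cluster.hierarchy import linkage, fcluster
from sentence_transformers import SentenceTransformer

sentences = [
    "The cat sits on the mat.",
    "A cat is lying on a rug.",
    "Stock markets fell sharply today.",
    "Shares dropped after the announcement.",
]

model = SentenceTransformer("bert-base-nli-mean-tokens")
emb = model.encode(sentences)                  # each sentence is embedded exactly once

# Agglomerative clustering on the n x d embedding matrix; no pairwise
# cross-encoder passes are needed, which is the source of the speed-up.
Z = linkage(emb, method="average", metric="cosine")
labels = fcluster(Z, t=0.4, criterion="distance")
print(list(zip(sentences, labels)))
```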
3,351
1908.10155
2970282336
Video action recognition, which is topical in computer vision and video analysis, aims to allocate a short video clip to a pre-defined category such as brushing hair or climbing stairs. Recent works focus on action recognition with deep neural networks that achieve state-of-the-art results in need of high-performance platforms. Despite the fast development of mobile computing, video action recognition on mobile devices has not been fully discussed. In this paper, we focus on the novel mobile video action recognition task, where only the computational capabilities of mobile devices are accessible. Instead of raw videos with huge storage, we choose to extract multiple modalities (including I-frames, motion vectors, and residuals) directly from compressed videos. By employing MobileNetV2 as backbone, we propose a novel Temporal Trilinear Pooling (TTP) module to fuse the multiple modalities for mobile video action recognition. In addition to motion vectors, we also provide a temporal fusion method to explicitly induce the temporal context. The efficiency test on a mobile device indicates that our model can perform mobile video action recognition at about 40FPS. The comparative results on two benchmarks show that our model outperforms existing action recognition methods in model size and time consuming, but with competitive accuracy.
Pooling methods are requisite either in two-stream networks @cite_32 @cite_33 or in other feature fusion models. @cite_9 simply uses average pooling and outperforms others. @cite_28 proposes bilinear pooling to model local parts of objects: two feature representations are learned separately and then multiplied using the outer product to obtain a holistic representation. @cite_23 combines the two-stream network with a compact bilinear representation @cite_4. @cite_30 defines a general kernel-based pooling framework which captures higher-order interactions of features. However, most existing bilinear pooling models can combine only two features, and none of their variants cope with more than two features, which is needed in video action recognition.
{ "abstract": [ "Convolutional Neural Networks (CNNs) with Bilinear Pooling, initially in their full form and later using compact representations, have yielded impressive performance gains on a wide range of visual tasks, including fine-grained visual categorization, visual question answering, face recognition, and description of texture and style. The key to their success lies in the spatially invariant modeling of pairwise (2nd order) feature interactions. In this work, we propose a general pooling framework that captures higher order interactions of features in the form of kernels. We demonstrate how to approximate kernels such as Gaussian RBF up to a given order using compact explicit feature maps in a parameter-free manner. Combined with CNNs, the composition of the kernel can be learned from data in an end-to-end fashion via error back-propagation. The proposed kernel pooling scheme is evaluated in terms of both kernel approximation error and visual recognition accuracy. Experimental evaluations demonstrate state-of-the-art performance on commonly used fine-grained recognition datasets.", "Bilinear models has been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine grained recognition and face recognition. However, bilinear features are high dimensional, typically on the order of hundreds of thousands to a few million, which makes them impractical for subsequent analysis. We propose two compact bilinear representations with the same discriminative power as the full bilinear representation but with only a few thousand dimensions. Our compact representations allow back-propagation of classification errors enabling an end-to-end optimization of the visual recognition system. The compact bilinear representations are derived through a novel kernelized analysis of bilinear pooling which provide insights into the discriminative power of bilinear pooling, and a platform for further research in compact pooling methods. Experimentation illustrate the utility of the proposed representations for image classification and few-shot learning across several datasets.", "", "We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using outer product at each location of the image and pooled to obtain an image descriptor. This architecture can model local pairwise feature interactions in a translationally invariant manner which is particularly useful for fine-grained categorization. It also generalizes various orderless texture descriptors such as the Fisher vector, VLAD and O2P. We present experiments with bilinear models where the feature extractors are based on convolutional neural networks. The bilinear form simplifies gradient computation and allows end-to-end training of both networks using image labels only. Using networks initialized from the ImageNet dataset followed by domain specific fine-tuning we obtain 84.1 accuracy of the CUB-200-2011 dataset requiring only category labels at training time. We present experiments and visualizations that analyze the effects of fine-tuning and the choice two networks on the speed and accuracy of the models. Results show that the architecture compares favorably to the existing state of the art on a number of fine-grained datasets while being substantially simpler and easier to train. Moreover, our most accurate model is fairly efficient running at 8 frames sec on a NVIDIA Tesla K40 GPU. 
The source code for the complete system will be made available at http://vis-www.cs.umass.edu/bcnn.", "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https://github.com/yjxiong/temporal-segment-networks).", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Two-stream convolutional networks have shown strong performance in video action recognition tasks. The key idea is to learn spatiotemporal features by fusing convolutional networks spatially and temporally. However, it remains unclear how to model the correlations between the spatial and temporal structures at multiple abstraction levels. First, the spatial stream tends to fail if two videos share similar backgrounds. Second, the temporal stream may be fooled if two actions resemble each other in short snippets, though they appear to be distinct in the long term. We propose a novel spatiotemporal pyramid network to fuse the spatial and temporal features in a pyramid structure such that they can reinforce each other. From the architecture perspective, our network constitutes hierarchical fusion strategies which can be trained as a whole using a unified spatiotemporal loss. A series of ablation experiments support the importance of each fusion strategy. From the technical perspective, we introduce the spatiotemporal compact bilinear operator into video analysis tasks.
This operator enables efficient training of bilinear fusion operations which can capture full interactions between the spatial and temporal features. Our final network achieves state-of-the-art results on standard video datasets." ], "cite_N": [ "@cite_30", "@cite_4", "@cite_33", "@cite_28", "@cite_9", "@cite_32", "@cite_23" ], "mid": [ "2740620254", "2963066927", "", "2104657103", "2507009361", "2156303437", "2736596806" ] }
Mobile Video Action Recognition
Video analysis has drawn increasing attention from the computer vision community, since videos occupy more than 75% of the global IP traffic [11]. With the development of deep learning methods, recent works achieve promising performance on video analysis tasks, including action recognition [6,21,24,28,37], emotion recognition [32], human detection [5], and deception detection [3]. Taking video action recognition as an example, most works have focused on raw video analysis using deep learning models and optical flow [24,28], without limits on storage and computation. Furthermore, compressed video action recognition [30] has been proposed to replace raw videos (i.e., RGB frames) with compressed representations (e.g., MPEG-4), which retain only a few key frames and their offsets (i.e., motion vectors and residual errors) for storage reduction. However, none of these works can be directly applied on mobile devices, which have limited storage for both video data and the analysis model. Therefore, the next-generation video analysis technique on mobile devices is expected to satisfy two requirements: 1) the framework is lightweight; 2) the model is able to deal with compressed videos. In this paper, we present a novel video analysis task, called mobile video action recognition, to fill in the above gap in video analysis (also see Table 1). Specifically, mobile video action recognition aims to perform efficient action recognition with compressed videos on mobile devices, considering the limits of storage and computation. This novel task matches the trend of video analysis on mobile phones. On one hand, action recognition can help to automatically tag users' videos on social networking apps (e.g., Facebook and Instagram) before uploading from mobile phones. On the other hand, conducting video analysis on mobile devices can reduce the load and computation on the cloud server. The key challenges of mobile video action recognition are: 1) how to design a lightweight and high-performance framework applicable on mobile devices; 2) how to extract meaningful representations from compressed videos. Table 1: Overview of state-of-the-art video analysis models. "Compressed" denotes the usage of compressed videos; "Mobile" denotes whether a model is applicable on mobile devices. Not compressed / not mobile: C3D [27], Two-Stream [24], TSN [28], TS R-CNN [21]; not compressed / mobile: Bottleneck-LSTM [19]; compressed / not mobile: CoViAR [30], DTMV-CNN [35]; compressed / mobile: TTP (Ours). Thanks to the rapid development of neural network compression [19,23,31,33], compressed small networks can be used as the backbone for overcoming the first challenge, balancing effectiveness and efficiency. In addition, compressed videos (e.g., MPEG-4) have been exploited [35] to replace raw videos for saving storage: multiple modalities (including the RGB I-frame (I), low-resolution motion vector (MV) and residual (R) [30]) are extracted to avoid bringing in extra optical flow data. However, [30] used three deep models to process I-frames, motion vectors and residuals, and simply summed the prediction scores from all these modalities in an ensemble manner. We argue that this ensemble strategy cannot fully capture the inner interactions between the different modalities from compressed videos. Figure 1 (caption): Representative baselines include CoViAR [30], two-stream network [24], C3D [6], and ResNet-152 [8]; the node size denotes the scale (i.e., parameters) of the corresponding model. According to the above considerations, we propose a lightweight framework to solve the mobile video action recognition task.
Specifically, we employ MobileNetV2 [23] as the backbone network to process the multiple modalities (including the RGB I-frame I, motion vectors MV, and residual R) extracted from compressed videos. Instead of the score ensemble strategy used in [30], we further propose a novel Temporal Trilinear Pooling (TTP) module for fusing the extracted modalities. In addition to motion vectors, we also put forward a temporal fusion method that combines successive key frames to achieve a better temporal representation. As shown in Table 1, the proposed TTP does fill in the gap in video analysis. Particularly, the efficiency test on a mobile device indicates that our TTP can perform mobile video action recognition at about 40FPS (see Table 2). Moreover, as shown in Figure 1, our TTP achieves competitive performance with far fewer parameters, as compared to state-of-the-art methods for video action recognition. This observation is further supported by the extensive results reported on two action recognition benchmarks (see Table 4). Our contributions are: (1) We present a novel mobile video action recognition task, which aims to perform efficient action recognition with compressed videos on mobile devices. This novel task fills in the gap in action recognition and video analysis. (2) We propose a lightweight framework with a trilinear pooling module for fusing the multiple modalities extracted from compressed videos. (3) In addition to the motion vector information used for replacing optical flow, we provide a temporal fusion method to explicitly induce the temporal context into mobile video action recognition. Problem Definition We have discussed the background of video action recognition in Sections 1 and 2. In this section, we first give the details of extracting multiple modalities from compressed videos for action recognition, and then define our mobile video action recognition problem with the extracted modalities. Extracting Multiple Modalities from Compressed Videos We follow [17,30] for the usage of compressed videos in action recognition. Formally, a compressed video consists of several segments. Given one video segment with n frames, we treat all frames as a sequence F = [f_0, f_1, ..., f_{n−1}], where each frame f_i ∈ R^{h×w×3} is an RGB image with 3 channels, and h and w represent the height and width, respectively. Since video compression leverages the continuity between adjacent frames, only some key frames instead of all frames need to be stored. Instead of storing all frames ([f_0, f_1, ..., f_{n−1}]) as in uncompressed videos, compressed videos store only f_0 as the intra-coded frame of this segment, which is called the I-frame (I). The remaining frames in this segment are treated as P-frames, where the differences between each P-frame f_i and the I-frame f_0 are extracted and stored as a motion vector (MV) and residual information (R). Specifically, the motion information is described as a "motion vector" and stored in the P-frames as MV = [MV_1, MV_2, ..., MV_{n−1}], where MV_i ∈ R^{h×w×2}. Note that motion vectors have only 2 channels because the motion information is described along the height and width dimensions but not the color channels. For each position (x, y) of MV_i, the entries (mv_ix, mv_iy) depict the horizontal and vertical movements from frame f_0 to frame f_i, where x and y are the coordinates on the grid. However, the motion vector itself is not enough to store all the information in f_i, and a small difference still remains.
This remaining difference between each frame and its motion-compensated I-frame is stored as residual information R = [R_1, R_2, ..., R_{n−1}], where R_i ∈ R^{h×w×3} is an RGB-like image. Therefore, F is transformed to F = [f_0, f_1, ..., f_{n−1}], where each frame f_i is formalized as f_i(x, y, z) = f_0(x, y, z) if i = 0, and f_i(x, y, z) = f_0(x + mv_ix, y + mv_iy, z) + r_i(x, y, z) if i ≠ 0, (1) where x, y, z denote the coordinates in the three dimensions. Note that each video segment generates one I-frame and several P-frames. In MPEG-4 video, the segment length n equals 12 on average, so that each I-frame has its own 11 P-frames. Since compressed video is the universal format stored on mobile devices, we can extract motion vectors directly from compressed videos to approximate the optical flow, which helps to construct the temporal structure for action recognition. Mobile Video Action Recognition with Multiple Modalities We formally define our mobile video action recognition problem as follows. Given the three modalities (i.e., I, MV, and R) extracted from a compressed video, the goal of mobile video action recognition is to perform efficient action recognition (i.e., assigning the correct action category to the video) on mobile devices. Recent works utilize different CNNs to process the modalities independently. In [30], ResNet-152 [8] is used to process I, which is effective but consumes too much time and storage (see Table 1). In this work, we uniformly adopt MobileNetV2 [23] as the backbone. Specifically, MobileNet_I takes I ∈ R^{h×w×3} as input, while MobileNet_MV takes MV and MobileNet_R takes R as input, respectively. In [30], the three modalities are only late-fused at the test phase, and each modality is processed independently during the training stage. Therefore, the interactions between different modalities are not fully explored, and the multi-modal fusion is not consistent between the training and test stages, which limits the performance of action recognition. Methodology In this section, we first introduce the trilinear pooling method for multi-modal fusion in mobile video action recognition. We then refine the multi-modal representation by inducing the temporal context information. The overall architecture is illustrated in Figure 2(a). Trilinear Pooling In this work, we propose a novel trilinear pooling module to model three-factor variations together. This is motivated by bilinear models [18,34], which were initially proposed to model two-factor variations, such as "style" and "content". For mobile video action recognition, we generalize the bilinear pooling method to fuse the three modalities extracted from compressed videos. Specifically, a (typically deep) feature vector is denoted as x ∈ R^c, where c is the dimension of the feature x. The bilinear combination of two feature vectors with the same dimension, x ∈ R^c and y ∈ R^c, is defined as xy^T ∈ R^{c×c} [18]. In general, given two representation matrices X = [x_1; x_2; ...; x_K] ∈ R^{c×K} and Y = [y_1; y_2; ...; y_K] ∈ R^{c×K} for two frames, a pooling layer follows the bilinear combination of these two matrices: f_BP(X, Y) = (1/K) Σ_{i=1}^{K} x_i y_i^T = (1/K) X Y^T. (2) It can be clearly seen that the above bilinear pooling allows the outputs (X and Y) of the feature extractor to be conditioned on each other by considering all their pairwise interactions in the form of a quadratic kernel expansion. However, this results in very high-dimensional features with a large number of parameters involved.
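To make the dimensionality issue of Eq. (2) concrete before the factorized variants are introduced next, here is a small PyTorch sketch of plain bilinear pooling; the shapes are illustrative (c = 1280 matches MobileNetV2's final feature channels, K = 49 a 7×7 feature map).

```python
# Illustrative sketch of bilinear pooling, Eq. (2): f_BP(X, Y) = (1/K) X Y^T.
# X and Y hold K local feature vectors of dimension c (e.g., flattened spatial
# positions of a CNN feature map); values and shapes are made up for the example.
import torch

c, K = 1280, 49                        # channels, spatial positions (7x7 map)
X = torch.randn(c, K)
Y = torch.randn(c, K)

f_bp = (X @ Y.t()) / K                 # c x c matrix of pairwise channel interactions
print(f_bp.shape, f_bp.numel())        # torch.Size([1280, 1280]) -> ~1.6M dims per sample,
                                       # which is why a factorized form is preferred next.
```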
To address this problem, Multi-modal Factorized Bilinear pooling (MFB) [34] introduces an efficient attention mechanism into the original bilinear pooling based on the Hadamard product. The full MFB model is defined as follows: f_MFB(x, y)_i = x^T F_i y, (3) where F = [F_1; F_2; ...; F_D] ∈ R^{c×c×D}, D is the number of projection matrices, and each x^T F_i y contributes one dimension of the resultant feature. Obviously, the MFB model reduces the output dimension from c × c to D, which incurs much less computational cost. According to the matrix factorization technique [22], each F_i in F can be factorized into two low-rank matrices: f_MFB(x, y)_i = x^T F_i y = x^T U_i V_i^T y = Σ_{j=1}^{d} x^T u_ij v_ij^T y = 1^T (U_i^T x ∘ V_i^T y), (4) where U_i ∈ R^{c×d} and V_i ∈ R^{c×d} are projection matrices, ∘ is the Hadamard product, 1 ∈ R^d is an all-one vector, and d denotes the dimension of the factorized matrices. Therefore, we only need to learn U = [U_1; U_2; ...; U_D] ∈ R^{c×d×D} and V = [V_1; V_2; ...; V_D] ∈ R^{c×d×D}. Inspired by Eq. (4), we propose a novel trilinear pooling method, which aims to fuse three feature vectors (x, y and z). Unlike bilinear pooling, which can combine only two feature vectors, our Trilinear Pooling method fuses x, y and z using the Hadamard product: f_TP(x, y, z) = 1^T (U^T x ∘ V^T y ∘ W^T z), (5) where W is also a projection matrix, W = [W_1; W_2; ...; W_D] ∈ R^{c×d×D}, and f_TP denotes the output of trilinear pooling. Note that our proposed trilinear pooling degrades to MFB if all elements in W and z are fixed as 1. When the inputs are generalized to feature maps (i.e., X = [x_i], Y = [y_i], Z = [z_i] ∈ R^{c×K}), every position of these feature maps makes up one group of inputs, and their outputs are summed element-wise (sum pooling) as: f_TP(X, Y, Z) = Σ_{i=1}^{K} f_TP(x_i, y_i, z_i). (6) In this paper, we utilize the Trilinear Pooling model to obtain the multi-modal representation of the t-th segment by fusing the I-frame I_t, motion vector MV_t and residual R_t (see Figure 2(b)): f_TP(I_t, MV_t, R_t) = Σ_{i=1}^{K} f_TP(I_{t,i}, MV_{t,i}, R_{t,i}), (7) where I_t, MV_t and R_t are the output feature maps from the penultimate layers of MobileNet_I, MobileNet_MV and MobileNet_R, respectively. For each segment t, the single I-frame is selected as I_t, while one MV_t and one R_t are randomly selected. As in [12], the trilinear vector is then processed with a signed square root step (f ← sign(f)·√|f|), followed by l2 normalization (f ← f/||f||). Temporal Trilinear Pooling The motion vector is initially introduced to represent the temporal structure as optical flow does. However, compared to high-resolution optical flow, the motion vector is so coarse that it only describes the movement of blocks, and thus all values (mv_ix, mv_iy) within the same macro-block are identical. Although we have proposed trilinear pooling to address this drawback, the temporal information still needs to be explicitly explored. Since residuals are also computed as the difference between each frame and its motion-compensated I-frame, they are strongly correlated with motion vectors. Therefore, we treat the motion vectors and the residuals jointly. Note that the fusion of I_t, MV_t and R_t within only one segment is not enough to capture the temporal information. We further choose to include the adjacent segment's information. Specifically, in addition to calculating f_TP(I_t, MV_t, R_t) by trilinear pooling, we also combine MV_t and R_t with I_{t+Δt} (i.e., the I-frame of the adjacent segment).
The output of temporal trilinear pooling is defined as (see Figure 2(a)): f_TTP(t) = f_TP(t, 0) + f_TP(t, Δt), (8) where f_TP(t, Δt) denotes f_TP(I_{t+Δt}, MV_t, R_t) for simplicity. In this paper, we sample the offset Δt from {−1, 1} during the training stage. During the test stage, Δt is fixed to 1 for the first frame and −1 for the other frames. This temporal fusion method addresses the temporal representation drawback without introducing extra parameters, which is efficient and can be implemented on mobile devices. The TTP representation is further fed into a fully connected layer to calculate the classification scores s(t) = P^T f_TTP(t), where P ∈ R^{D×C} holds learnable parameters and C is the number of categories. Since mobile video action recognition can be regarded as a multi-class classification problem, we utilize the standard cross-entropy loss for model training: L_TTP(t) = − log softmax(s_gt(t)), (9) where s_gt(t) is the predicted score for the t-th segment with respect to its ground-truth class label. Experiments Datasets In this paper, we report results on two widely used benchmark datasets: HMDB-51 [15] and UCF-101 [25]. HMDB-51 contains 6,766 videos from 51 action categories, while UCF-101 contains 13,320 videos from 101 action categories. Both datasets have 3 officially given training/test splits. In HMDB-51, each training/test split contains 3,570 training clips and 1,530 test clips. In UCF-101, each training/test split contains a different number of clips, approximately 9,600 clips in the training split and 3,700 clips in the test split. Since each video in these two datasets is a short clip belonging to a single category, we employ top-1 accuracy on video-level class prediction as the evaluation metric. Implementation Details As in [28,30], we resize frames in all videos to 340 × 256. In order to implement our model on mobile devices, we simply choose three MobileNetV2 [23] networks pretrained on ImageNet as the core CNN modules to extract the representations of I-frames, motion vectors and residuals. All the parameters of the projection layers are randomly initialized. The raw videos are encoded in MPEG-4 format. In the training phase, in addition to picking 3 segments randomly from each video clip, we also pick three adjacent I-frames for each selected segment for the proposed temporal fusion method. We apply color jittering, horizontal flipping and random cropping to 224 × 224 for data augmentation, as in previous works. There are two stages in the training phase. First, following [30], we fine-tune the three CNN modules by using the I-frames, motion vectors and residuals independently. Second, we jointly fine-tune the CNN modules and the TTP module. The two training stages are done with the Adam optimizer: the learning rate is set to 0.01 in the first stage and 0.005 in the second stage, multiplied by 0.1 on plateau. According to [34], the dimension of the projection layers is empirically set to 8,192. In the test phase, we also follow [30] by randomly sampling 25 segments and their adjacent I-frames for each video. We apply horizontal flips and 5 random crops for data augmentation. All experiments are implemented with PyTorch. In the limited-resource simulation for mobile video action recognition, we select the Nvidia Jetson TX2 platform with 8G memory, 32G storage and an Nvidia Pascal GPU. Moreover, in the sufficient-resource simulation for conventional video action recognition, we select the Dell R730 platform with an Nvidia Titan Xp GPU.
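Before the efficiency results, the following PyTorch sketch shows one way to read Eqs. (5)-(9): trilinear pooling over per-position features, the temporal sum of Eq. (8), and the classification loss of Eq. (9). It is our own sketch, not the released code; the backbone networks are abstracted away, and the dimensions d, D and the class count are illustrative (the paper only fixes the projection dimension D = 8192; a smaller D is used here to keep the example light).

```python
# Illustrative PyTorch sketch of Trilinear Pooling (Eqs. 5-7), the temporal fusion
# of Eq. (8) and the classification loss of Eq. (9). The three MobileNetV2
# backbones are abstracted away; random tensors stand in for their feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrilinearPooling(nn.Module):
    def __init__(self, c=1280, d=4, D=1024):
        super().__init__()
        # U, V, W in R^{c x d x D}, stored here as linear maps onto d*D units
        # (the grouping into D blocks of size d is a parameterization choice).
        self.U = nn.Linear(c, d * D, bias=False)
        self.V = nn.Linear(c, d * D, bias=False)
        self.W = nn.Linear(c, d * D, bias=False)
        self.d, self.D = d, D

    def forward(self, I, MV, R):
        # I, MV, R: (batch, K, c) feature maps flattened over K spatial positions.
        h = self.U(I) * self.V(MV) * self.W(R)             # Hadamard product, Eq. (5)
        h = h.view(*h.shape[:-1], self.D, self.d).sum(-1)  # 1^T(...): sum over rank d
        h = h.sum(dim=1)                                   # sum pooling over positions, Eqs. (6)-(7)
        h = torch.sign(h) * torch.sqrt(h.abs() + 1e-12)    # signed square root
        return F.normalize(h, dim=-1)                      # l2 normalization

tp = TrilinearPooling()
cls_head = nn.Linear(1024, 101)        # s(t) = P^T f_TTP(t); 101 classes as in UCF-101

def ttp_loss(I_t, I_adj, MV_t, R_t, labels):
    # Eq. (8): f_TTP(t) = f_TP(I_t, MV_t, R_t) + f_TP(I_{t+dt}, MV_t, R_t)
    f_ttp = tp(I_t, MV_t, R_t) + tp(I_adj, MV_t, R_t)
    return F.cross_entropy(cls_head(f_ttp), labels)        # Eq. (9)

# Example call with random tensors standing in for backbone outputs.
B, K, c = 2, 49, 1280
feats = [torch.randn(B, K, c) for _ in range(4)]           # I_t, I_adj, MV_t, R_t
print(ttp_loss(*feats, torch.randint(0, 101, (B,))).item())
```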
Efficiency Test We first demonstrate the per-frame running time and FPS of our model in both limited-resource and sufficient-resource environments. We also compare with the state-of-the-art CoViAR [30], since it also exploits compressed videos and does not use optical flow information. However, CoViAR utilizes ResNet-152 for I-frames and two ResNet-18 networks for motion vectors and residuals independently, which occupy too much space and are slow to run. We compare the efficiency of the two models (i.e., CoViAR and our model) under exactly the same setting. On the Dell R730 platform, the preprocessing phase (including loading networks) is mainly run on two Intel Xeon Silver 4110 CPUs, and the CNN forwarding phase (including extracting motion vectors and residuals) is mainly run on one TITAN Xp GPU. As shown in Table 2, our TTP model runs faster than CoViAR in both the pre-processing and CNN phases on the Dell R730 platform. The reason is that CoViAR has three large networks with lots of parameters, resulting in huge cost (see also Table 4). Ablation Study We conduct experiments to show the benefits of using our TTP model compared with single-modality and other fusion options applicable on mobile devices. Specifically, we uniformly use three MobileNetV2 networks to process the three components (I, MV and R) extracted from compressed videos, and demonstrate all the ablative results on the two benchmarks by training with different parts of our TTP model. For single-modality based models, "I", "MV" and "R" denote the results obtained by using MobileNet_I, MobileNet_MV and MobileNet_R, respectively. For the late-fusion based model, "I+MV+R" indicates that the output is fused by simply adding the scores of the three CNNs together. We also compare our TTP model with existing bilinear pooling models, which are degraded forms of our Trilinear Pooling. Since existing bilinear pooling methods cannot be directly adapted to the three modalities, we simply apply pairwise combinations over them, and sum the three predicted classification scores together as in "I+MV+R". Since conventional bilinear pooling [18] and factorized bilinear pooling [12] have too many parameters to be implemented on mobile devices, we resort to compact bilinear pooling [7] (denoted as "BP"), which has much fewer parameters. Finally, "TP" is obtained by implementing our Trilinear Pooling module, while "TTP" denotes our proposed Temporal Trilinear Pooling module. As shown in Table 3, single-modality based models (i.e., I, MV, or R) could not achieve good performance without using multi-modal information, which indicates that the compressed video needs to be fully explored in order to obtain high accuracy. I and R yield similar results because they both contain RGB data: the I-frames contain a small number of informative frames, while the residuals contain a large number of less informative frames. Since the motion vectors only contain the motion information, MV could not perform as well as the other two for action recognition. For multi-modal fusion, all bilinear/trilinear pooling methods outperform I+MV+R, showing the power of pooling methods over linear late fusion. Moreover, our TP method achieves a 1% improvement over BP, which validates the effectiveness of our proposed pooling method. In addition, TTP outperforms TP, demonstrating the importance of the temporal information. Comparative Results We make a comprehensive comparison between our TTP method and other state-of-the-art action recognition methods.
To this end, we compute the efficiency (i.e., parameters and GFLOPs) and top-1 accuracy as the evaluation metrics for action recognition. In our experiments, all compared methods can be divided into two groups: 1) Raw-Video Based Methods: LRCN [4] and Composite LSTM [26] utilize RNNs to process the optical flow information, while Two-stream [24] adopts two-way CNNs to process the optical flow information. C3D [27], TSN (RGB-only) [28] and ResNet [8] employ large CNN models over RGB frames without using other information. 2) Compressed-Video Based Methods: DTMV-CNN [35] integrates both compressed videos and raw ones into a two-stream network, while CoViAR [30] is the most closely related model (w.r.t. our TTP) that replaces raw videos by compressed ones, without using any optical flow for action recognition. The comparative results are shown in Table 4. We have the following observations: (1) Our TTP model is the most efficient for video action recognition. Specifically, it contains only 17.5 × 10^6 parameters and requires only 1.4 GFLOPs on average over all frames. (2) Our TTP model outperforms most state-of-the-art methods in both efficiency and accuracy. (3) Although our TTP model achieves slightly lower accuracies than Two-stream, CoViAR and DTMV-CNN, it leads to significant efficiency improvements over these three methods. (4) Compared to CoViAR, our TTP model saves nearly 80% of the storage. Note that "I+MV+R" in Table 3 is essentially CoViAR with MobileNetV2 as the core CNN module. Under such a fair comparison setting, our TTP model consistently yields accuracy improvements over CoViAR on most of the dataset splits. (5) Compressed-video based models generally achieve competitive accuracies compared to raw-video based models, showing the high feasibility of exploiting compressed video in action recognition. Conclusion Video action recognition is a hot topic in computer vision. However, in the mobile computing age, this task has hardly been explored on mobile devices. In this paper, we thus focus on a novel mobile video action recognition task, where only the computational capabilities of mobile devices are accessible. Instead of raw videos with huge storage, we choose to extract multiple modalities (including I-frames, motion vectors, and residuals) directly from compressed videos. By employing MobileNetV2 as the backbone network, we propose a novel Temporal Trilinear Pooling (TTP) module to fuse the multiple modalities for mobile video action recognition. In addition to motion vectors, we also provide a temporal fusion method that combines successive key frames to achieve a better temporal representation. The efficiency test on a mobile device indicates that our TTP model can perform mobile video action recognition at about 40FPS. The comparative results on two benchmarks demonstrate that our TTP model outperforms existing action recognition methods in model size and time consumption, while achieving competitive accuracy. In our ongoing research, we will introduce the attention strategy into mobile video action recognition to obtain better results.
4,102
1908.10155
2970282336
Video action recognition, which is topical in computer vision and video analysis, aims to allocate a short video clip to a pre-defined category such as brushing hair or climbing stairs. Recent works focus on action recognition with deep neural networks that achieve state-of-the-art results in need of high-performance platforms. Despite the fast development of mobile computing, video action recognition on mobile devices has not been fully discussed. In this paper, we focus on the novel mobile video action recognition task, where only the computational capabilities of mobile devices are accessible. Instead of raw videos with huge storage, we choose to extract multiple modalities (including I-frames, motion vectors, and residuals) directly from compressed videos. By employing MobileNetV2 as backbone, we propose a novel Temporal Trilinear Pooling (TTP) module to fuse the multiple modalities for mobile video action recognition. In addition to motion vectors, we also provide a temporal fusion method to explicitly induce the temporal context. The efficiency test on a mobile device indicates that our model can perform mobile video action recognition at about 40FPS. The comparative results on two benchmarks show that our model outperforms existing action recognition methods in model size and time consuming, but with competitive accuracy.
Recently, lightweight neural networks including SqueezeNet @cite_15 , Xception @cite_8 , ShuffleNet @cite_31 , ShuffleNetV2 @cite_7 , MobileNet @cite_12 , and MobileNetV2 @cite_10 have been proposed to run on mobile devices with significantly reduced parameters and computation. Since we focus on mobile video action recognition, any of these lightweight models could be used as the backbone.
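As a concrete illustration of this design choice, the sketch below (not the authors' code) shows how a MobileNetV2 from torchvision might be adapted as a per-modality backbone; the two-channel handling for motion vectors and the class count are assumptions made here for illustration.

```python
# A minimal sketch of using torchvision's MobileNetV2 as a lightweight backbone.
import torch
import torch.nn as nn
from torchvision import models

def make_backbone(in_channels: int = 3, num_classes: int = 101) -> nn.Module:
    net = models.mobilenet_v2()              # ImageNet weights can be requested via the weights argument
    if in_channels != 3:
        # Motion vectors have only 2 channels, so the first conv must be adapted.
        old = net.features[0][0]
        net.features[0][0] = nn.Conv2d(in_channels, old.out_channels,
                                       kernel_size=old.kernel_size,
                                       stride=old.stride, padding=old.padding,
                                       bias=False)
    net.classifier[1] = nn.Linear(net.last_channel, num_classes)
    return net

if __name__ == "__main__":
    rgb_net = make_backbone(3)               # for I-frames or residuals
    mv_net = make_backbone(2)                # for motion vectors
    feats = rgb_net.features(torch.randn(1, 3, 224, 224))
    print(feats.shape)                       # (1, 1280, 7, 7) feature map
```

Swapping in ShuffleNetV2 or another model from the list above would only change the constructor call.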
{ "abstract": [ "We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8 ) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves 13A— actual speedup over AlexNet while maintaining comparable accuracy.", "Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.", "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.", "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. 
Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).", "In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization." ], "cite_N": [ "@cite_31", "@cite_7", "@cite_8", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2963125010", "2883780447", "2531409750", "2279098554", "2963163009", "2612445135" ] }
Mobile Video Action Recognition
Video analysis has drawn increasing attention from the computer vision community, since videos occupy more than 75% of the global IP traffic [11]. With the development of deep learning methods, recent works achieve promising performance on video analysis tasks, including action recognition [6,21,24,28,37], emotion recognition [32], human detection [5], and deception detection [3]. Taking video action recognition as an example, most works have focused on raw video analysis using deep learning models and optical flow [24,28], without limits on storage and computation. Furthermore, compressed video action recognition [30] has been proposed to replace raw videos (i.e., RGB frames) with compressed representations (e.g., MPEG-4), which retain only a few key frames and their offsets (i.e., motion vectors and residual errors) for storage reduction. However, none of these works can be directly applied on mobile devices, which have limited storage for both the video data and the analysis model. Therefore, the next-generation video analysis technique on mobile devices is expected to satisfy that 1) the framework is lightweight; 2) the model is able to deal with compressed videos. In this paper, we present a novel video analysis task, called mobile video action recognition, to fill in the above gap in video analysis (also see Table 1). Specifically, mobile video action recognition aims to perform efficient action recognition with compressed videos on mobile devices, considering the limits of storage and computation. This novel task matches the trend of video analysis on mobile phones. On one hand, action recognition can help to automatically tag users' videos on social networking apps (e.g., Facebook and Instagram) before uploading from mobile phones. On the other hand, conducting video analysis on mobile devices can reduce the load and computation on the cloud server. The key challenges of mobile video action recognition are: 1) how to design a lightweight and high-performance framework applicable on mobile devices; 2) how to extract meaningful representations from compressed videos.
Table 1: Overview of state-of-the-art video analysis models. "Compressed" denotes the usage of compressed videos. "Mobile" denotes whether a model is applicable on mobile devices.
Compressed = No, Mobile = No: C3D [27], Two-Stream [24], TSN [28], TS R-CNN [21]
Compressed = No, Mobile = Yes: Bottleneck-LSTM [19]
Compressed = Yes, Mobile = No: CoViAR [30], DTMV-CNN [35]
Compressed = Yes, Mobile = Yes: TTP (Ours)
Thanks to the rapid development of neural network compression [19,23,31,33], such compact networks can be used as the backbone for overcoming the first challenge, balancing effectiveness and efficiency. In addition, compressed videos (e.g., MPEG-4) have been exploited [35] to replace raw videos for saving storage: multiple modalities (including the RGB I-frame (I), the low-resolution motion vector (MV) and the residual (R) [30]) are extracted to avoid bringing in extra optical flow data. However, [30] used three deep models to process I-frames, motion vectors and residuals, and simply summed the prediction scores from all these modalities in an ensemble manner. We argue that this ensemble strategy cannot fully capture the inner interactions between different modalities from compressed videos.
Figure 1 (caption): representative baselines include CoViAR [30], two-stream network [24], C3D [6], and ResNet-152 [8]; the node size denotes the scale (i.e., parameters) of the corresponding model.
According to the above considerations, we propose a lightweight framework to solve the mobile video action recognition task.
Specifically, we employ MobileNetV2 [23] as the backbone network to process the multiple modalities (including RGB I-frame I, motion vectors MV, and residual R) extracted from compressed videos. Instead of the score ensemble strategy used in [30], we further propose a novel Temporal Trilinear Pooling (TTP) for fusing the extracted multiple modalities. In addition to motion vectors, we also put forward a temporal fusion method by combining successive key frames to achieve better temporal representation. As shown in Table 1, the proposed TTP does fill in the gap in video analysis. Particularly, the efficiency test on a mobile device indicates that our TTP can perform mobile video action recognition at about 40FPS (see Table 2). Moreover, as shown in Figure 1, our TTP achieves competitive performance but with extremely fewer parameters, as compared to state-of-the-art methods for video action recognition. This observation is further supported by the extensive results reported on two action recognition benchmarks (see Table 4). Our contributions are: (1) We present a novel mobile video action recognition task, which aims to perform efficient action recognition with compressed videos on mobile devices. This novel task fills in the gap in action recognition and video analysis. (2) We propose a lightweight framework with a trilinear pooling module for fusing the multiple modalities extracted from compressed videos. (3) In addition to the motion vector information used for replacing optical flow, we provide a temporal fusion method to explicitly induce the temporal context into mobile video action recognition. Problem Definition We have discussed the background of video action recognition in Sections 1 and 2. In this section, we first give the details of extracting multiple modalities from compressed videos for action recognition, and then define our mobile video action recognition problem with the extracted modalities. Extracting Multiple Modalities from Compressed Videos We follow [17,30] for the usage of compressed videos in action recognition. Formally, there are several segments in a compressed video. Given one video segment with n frames, we treat all frames as a sequence F = [f 0 , f 1 , . . . , f n−1 ], where each frame f i ∈ R h×w×3 is a RGB image with 3 channels, and h and w represent the height and width respectively. Since video compression leverages the spatial continuity of adjacent frames, only some key frames instead of all frames need to be stored. Against storing all frames ([f 0 , f 1 , . . . , f n−1 ]) in uncompressed videos, compressed videos only store f 0 as the intra-coded frame in this segment, which is called I-frame (I). The rest frames in this segment are treated as P-frames, where the differences between each P-frame f i and the I-frame f 0 are extracted and stored as motion vector (MV) and residual information (R). Specifically, the motion information is described as "motion vector" and stored in P-frames as MV = [MV 1 , MV 2 , . . . , MV n−1 ] where MV i ∈ R h×w×2 . Note that motion vectors have only 2 channels because the motion information is described in height and width channels but not color channels. For each position (mv ix , mv iy ) in MV i , mv ix and mv iy depict the horizontal and vertical movements from frame f 0 to frame f i , where x and y are the coordinates of the grid. However, the motion vector itself is not enough to store all the information in f i , and there still exists some small differentiation. 
This remaining difference between each frame and its motion-compensated I-frame (i.e., the I-frame shifted by the motion vector) is stored as the residual information R = [R_1, R_2, ..., R_{n-1}], where R_i ∈ R^{h×w×3} is an RGB-like image. Therefore, F is transformed to F = [f_0, f_1, ..., f_{n-1}], where each frame f_i is formalized as
f_i(x, y, z) = \begin{cases} f_0(x, y, z), & i = 0 \\ f_0(x + mv_{ix}, y + mv_{iy}, z) + r_i(x, y, z), & i \neq 0, \end{cases}    (1)
where x, y, z denote the coordinates in the three dimensions. Note that each video segment generates one I-frame and several P-frames. In MPEG-4 video, the segment length n equals 12 on average, so that each I-frame is associated with 11 P-frames of its own. Since compressed video is the universal format stored on mobile devices, we can extract motion vectors directly from compressed videos to approximate the optical flow, which helps to construct the temporal structure for action recognition.
Mobile Video Action Recognition with Multiple Modalities
We formally define our mobile video action recognition problem as follows. Given the three modalities (i.e., I, MV, and R) extracted from a compressed video, the goal of mobile video action recognition is to perform efficient action recognition (i.e., aligning the unique action category with the video) on mobile devices. Recent works utilize different CNNs to process the modalities independently. In [30], ResNet-152 [8] is used to process I, which is effective but consumes too much time and storage (see Table 1). In this work, we uniformly adopt MobileNetV2 [23] as the backbone. Specifically, MobileNet_I takes I ∈ R^{h×w×3} as input, while MobileNet_MV takes MV and MobileNet_R takes R as input separately. In [30], the three modalities are only late-fused at the test phase, and each modality is processed independently during the training stage. Therefore, the interactions between different modalities are not fully explored, and the multi-modal fusion is not consistent between the training and test stages, which limits the performance of action recognition.
Methodology
In this section, we first introduce the trilinear pooling method for multi-modal fusion in mobile video action recognition. We then refine the multi-modal representation by inducing the temporal context information. The overall architecture is illustrated in Figure 2(a).
Trilinear Pooling
In this work, we propose a novel trilinear pooling module to model three-factor variations together. This is motivated by bilinear models [18,34], which were initially proposed to model two-factor variations, such as "style" and "content". For mobile video action recognition, we generalize the bilinear pooling method to fuse the three modalities extracted from compressed videos. Specifically, a feature vector (usually a deep one) is denoted as x ∈ R^c, where c is the dimension of the feature x. The bilinear combination of two feature vectors with the same dimension x ∈ R^c and y ∈ R^c is defined as x y^T ∈ R^{c×c} [18]. In general, given two representation matrices X = [x_1; x_2; ...; x_K] ∈ R^{c×K} and Y = [y_1; y_2; ...; y_K] ∈ R^{c×K} for two frames, a pooling layer follows the bilinear combination of these two matrices:
f_{BP}(X, Y) = \frac{1}{K} \sum_{i=1}^{K} x_i y_i^T = \frac{1}{K} X Y^T.    (2)
It can be clearly seen that the above bilinear pooling allows the outputs (X and Y) of the feature extractor to be conditioned on each other by considering all their pairwise interactions in the form of a quadratic kernel expansion. However, this results in very high-dimensional features with a large number of parameters involved.
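To make Eq. (2) concrete, the short PyTorch sketch below (illustrative only, with placeholder feature sizes) computes the plain bilinear descriptor; note that the output is c × c dimensional, roughly 1.6 million values for c = 1280, which is exactly the blow-up that the factorized pooling discussed next is designed to avoid.

```python
# A minimal sketch of the bilinear pooling in Eq. (2): outer products of
# per-position features, averaged over the K spatial positions.
import torch

def bilinear_pool(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    """X, Y: (K, c) feature matrices for one frame; returns a (c, c) descriptor."""
    K = X.shape[0]
    return X.T @ Y / K                 # equals (1/K) * sum_i x_i y_i^T

if __name__ == "__main__":
    X = torch.randn(49, 1280)          # e.g. a flattened 7x7 MobileNetV2 feature map
    Y = torch.randn(49, 1280)
    B = bilinear_pool(X, Y)
    print(B.shape, B.numel())          # torch.Size([1280, 1280]) -> ~1.6M-dimensional feature
```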
To address this problem, Multi-modal Factorized Bilinear (MFB) [34] introduces an efficient attention mechanism into the original bilinear pooling based on the Hadamard product. The full MFB model is defined as follows: f MFB (x, y) i = x T F i y,(3) where F = [F 1 ; F 2 ; · · · ; F D ] ∈ R c×c×D , D is the number of projection matrices, and each x T F i y contributes to one dimension of the resultant feature. Obviously, the MFB model reduces the output dimension from c × c to D, which causes much less computational cost. According to the matrix factorization technique [22], each F i in the F can be factorized into two low-rank matrices: f MFB (x, y) i = x T F i y = x T U i V T i y = d j=1 x T u ij v T ij y = 1 T (U T i x V T i y),(4) where U i ∈ R c×d and V i ∈ R c×d are projection matrices, is the Hadamard product, 1 ∈ R d is an all-one vector, and d denotes the dimension of these factorized matrices. Therefore, we only need to learn U = [U 1 ; U 2 ; · · · ; U D ] ∈ R c×d×D and V = [V 1 ; V 2 ; · · · ; V D ] ∈ R c×d×D . Inspired by Eq.(4), we propose a novel trilinear pooling method, which aims to fuse three feature vectors (x, y and z). Unlike bilinear pooling that can combine only two feature vectors, our Trilinear Pooling method fuse x, y and z using the Hadamard product: f TP (x, y, z) = 1 T (U T x V T y W T z),(5) where W is also a projection matrix W = [W 1 ; W 2 ; · · · ; W D ] ∈ R c×d×D , and f TP denotes the output of trilinear pooling. Note that our proposed trilinear pooling degrades to MFB if all elements in W and z are fixed as 1. When the inputs are generalized to feature maps (i.e., X = [ x i ], Y = [y i ], Z = [z i ] ∈ R c×K ) , every position of these feature maps makes up one group of inputs, and the outputs of them are summed element-wised (sum pooling) as: f TP (X, Y, Z) = K i=1 f TP (x i , y i , z i ).(6) In this paper, we utilize the Trilinear Pooling model to obtain the multi-modal representation of t-th segment by fusing I-frame I t , motion vector MV t and residual R t (see Figure 2(b)): f TP (I t , MV t , R t ) = K i=1 f TP (I t,i , MV t,i , R t,i ),(7) where I t , MV t and R t are the output feature maps from penultimate layer of MobileNet I , MobileNet M V and MobileNet R , respectively. For each segment t, the only I-frame is selected as I t , while one MV t and one R t are randomly selected. As in [12], the trilinear vector is then processed with a signed square root step (f ← sign(f ) |f |), followed by l 2 normalization (f ← f /||f ||). Temporal Trilinear Pooling Motion vector is initially introduced to represent the temporal structure as optical flow does. However, compared to the high-resolution optical flow, motion vector is so coarse that it only describes the movement of blocks, and thus all values within the same macro-block surrounded by (mv ix , mv iy ) are identical. Although we have proposed trilinear pooling to address this drawback, the temporal information needs to be explicitly explored. Since residuals are also computed as the difference between the their I-frame, they are strongly correlated with motion vectors. Therefore, we treat the motion vectors and the residuals integrally. Note that the fusion of I t , MV t and R t within only one segment is not enough to capture the temporal information. We further choose to include the adjacent segment's information. Specifically, in addition to calculating f TP (I t , MV t , R t ) by trilinear pooling, we also combine MV t and R t with I t+∆t (i.e. the I-frame in the adjacent segment). 
The output of temporal trilinear pooling is defined as (see Figure 2(a)): f TTP (t) = f TP (t, 0) + f TP (t, ∆t)(8) where f TP (t, ∆t) denotes f TP (I t+∆t , MV t , R t ) for simplicity. In this paper, we sample the offset ∆t from {−1, 1} during the training stage. During the test stage, ∆t is fixed as 1 for the first frame and −1 for other frames. This temporal fusion method solves the temporal representation drawback without introducing extra parameters, which is efficient and can be implemented on mobile devices. The TTP representation is further put into a fully connected layer to calculate the classification scores s(t) = P T f TTP (t), where P ∈ R D×C is learnable parameters and C is the number of categories. Since mobile video action recognition can be regarded as a multi-class classification problem, we utilize the standard cross-entropy loss for model training: L T T P (t) = − log softmax(s gt (t))(9) where s gt (t) is the predicted score for t-th segment with respect to its ground-truth class label. Experiments Datasets In this paper, we report results on two widely used benchmark dataset, including HMDB-51 [15] and UCF-101 [25]. HMDB-51 contains 6,766 videos from 51 action categories, while UCF-101 contains 13,320 videos from 101 action categories. Both datasets have 3 officially given training/test splits. In HMDB-51, each training/test split contains 3,570 training clips and 1,530 testing clips. In UCF-101, each training/test split contains different number of clips, approximately 9,600 clips in the training split and 3,700 clips in the test split. Since each video in these two datasets is a short clip belonging to a single category, we employ top-1 accuracy on video-level class prediction as the evaluation metric. Implementation Details As in [28,30], we resize frames in all videos to 340 × 256. In order to implement our model on mobile devices, we simply choose three MobileNetV2 [23] pretrained on ImageNet as the core CNN module to extract the representations of I-frames, motion vectors and residuals. All the parameters of the projection layers are randomly initialized. The raw videos are encoded in MPEG-4 format. In the training phase, in addition to picking 3 segments randomly in each video clip, we also pick three adjacent I-frames for each selected segment for the proposed temporal fusion method. We apply color jittering, horizontally flipping and random cropping to 224 × 224 for data augmentation, as in previous works. There are two stages in training phase. Firstly, following [30], we fine-tune the three CNN modules by using the I-frames, motion vectors and residuals data independently. Secondly, we jointly fine-tune the CNN modules and the TTP module. The two training stages are done with the Adam optimizer: the learning rate is set as 0.01 in the first stage and 0.005 in the second stage, multiplied by 0.1 on plateau. According to [34], the dimension of projection layers is empirically set to 8,192. In the test phase, we also follow [30] by randomly sampling 25 segments and their adjacent I-frames for each video. We apply horizontal flips and 5 random crops for data augmentation. All experiments are implemented with PyTorch. In limited-resource simulation for mobile video action recognition, we select the Nvidia Jetson TX2 platform with 8G memory, 32G storage and a Nvidia Pascal GPU. Moreover, in sufficient-resource simulation for conventional video action recognition, we select the Dell R730 platform with Nvidia Titan Xp GPU. 
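Before turning to the efficiency results, the hedged sketch below illustrates how the factorized trilinear pooling of Eqs. (5)-(7), the temporal fusion of Eq. (8), the signed-sqrt and l2 normalization, and the cross-entropy loss of Eq. (9) could fit together in PyTorch; the class names and hyper-parameters (c, d, D, number of classes) are placeholders rather than the paper's exact configuration.

```python
# A hedged sketch of trilinear pooling plus temporal fusion and the training loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrilinearPooling(nn.Module):
    """f_TP(X, Y, Z): project, fuse by Hadamard product, sum-pool over rank d and positions K."""
    def __init__(self, c: int, d: int = 16, D: int = 1024):
        super().__init__()
        self.d, self.D = d, D
        self.U = nn.Linear(c, d * D, bias=False)
        self.V = nn.Linear(c, d * D, bias=False)
        self.W = nn.Linear(c, d * D, bias=False)

    def forward(self, X, Y, Z):                            # X, Y, Z: (B, K, c)
        joint = self.U(X) * self.V(Y) * self.W(Z)          # Hadamard product, (B, K, d*D)
        joint = joint.view(X.shape[0], X.shape[1], self.D, self.d).sum(-1)
        return joint.sum(dim=1)                            # sum pooling over K -> (B, D)

class TTPHead(nn.Module):
    """f_TTP(t) = f_TP(I_t, MV_t, R_t) + f_TP(I_{t+dt}, MV_t, R_t), then classify."""
    def __init__(self, c: int, D: int, num_classes: int):
        super().__init__()
        self.tp = TrilinearPooling(c, D=D)
        self.fc = nn.Linear(D, num_classes)                # P in R^{D x C}

    def forward(self, I_t, I_adj, MV_t, R_t):
        f_ttp = self.tp(I_t, MV_t, R_t) + self.tp(I_adj, MV_t, R_t)       # Eq. (8)
        f_ttp = torch.sign(f_ttp) * torch.sqrt(torch.abs(f_ttp) + 1e-12)  # signed square root
        f_ttp = F.normalize(f_ttp, p=2, dim=-1)                           # l2 normalization
        return self.fc(f_ttp)                                             # scores s(t)

if __name__ == "__main__":
    head = TTPHead(c=1280, D=1024, num_classes=101)
    I_t, I_adj, MV_t, R_t = [torch.randn(2, 49, 1280) for _ in range(4)]  # 7x7 feature maps
    logits = head(I_t, I_adj, MV_t, R_t)
    loss = F.cross_entropy(logits, torch.tensor([3, 7]))                  # Eq. (9)
    print(logits.shape, loss.item())
```

In the real pipeline, the (B, K, c) inputs would be the penultimate-layer feature maps of the three MobileNetV2 streams rather than random tensors; the efficiency of such a combined module is what the next part measures.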
Efficiency Test
We first demonstrate the per-frame running time and FPS of our model in both limited-resource and sufficient-resource environments. We also compare with the state-of-the-art CoViAR [30], since it also exploits compressed videos and does not use the optical flow information. However, CoViAR utilizes a ResNet-152 for I-frames and two ResNet-18 networks for motion vectors and residuals independently, which occupy too much space and are slow to run. We compare the efficiency of the two models (i.e., CoViAR and our model) under exactly the same setting. On the Dell R730 platform, the preprocessing phase (including loading networks) is mainly run on two Intel Xeon Silver 4110 CPUs, and the CNN forwarding phase (including extracting motion vectors and residuals) is mainly run on one TITAN Xp GPU. As shown in Table 2, our TTP model runs faster than CoViAR in both the pre-processing and CNN phases on the Dell R730 platform. The reason is that CoViAR has three large networks with lots of parameters, resulting in huge cost (see also Table 4).
Ablation Study
We conduct experiments to show the benefits of using our TTP model compared with single-modality models and other fusion options applicable on mobile devices. Specifically, we uniformly use three MobileNetV2 networks to process the three components (I, MV and R) extracted from compressed videos, and demonstrate all the ablative results on the two benchmarks by training with different parts of our TTP model. For single-modality based models, "I", "MV" and "R" denote the results obtained by using MobileNet_I, MobileNet_MV and MobileNet_R, respectively. For the late-fusion based model, "I+MV+R" indicates that the output is fused by simply adding the scores of the three CNNs together. We also compare our TTP model with existing bilinear pooling models, which are degraded forms of our Trilinear Pooling. Since existing bilinear pooling methods cannot be directly adapted to the three modalities, we simply apply pairwise combinations over them, and sum the three predicted classification scores together as in "I+MV+R". Since conventional bilinear pooling [18] and factorized bilinear pooling [12] have too many parameters to be implemented on mobile devices, we resort to compact bilinear pooling [7] (denoted as "BP"), which has much fewer parameters. Finally, "TP" is obtained by implementing our Trilinear Pooling module, while "TTP" denotes our proposed Temporal Trilinear Pooling module. As shown in Table 3, single-modality based models (i.e., I, MV, or R) could not achieve good performance without using multi-modal information, which indicates that the compressed video needs to be fully explored in order to obtain high accuracy. I and R yield similar results because they both contain the RGB data: the I-frames contain a small number of informative frames, while the residuals contain a large number of less informative frames. Since the motion vectors only contain the motion information, MV could not perform as well as the other two for action recognition. For multi-modal fusion, all bilinear/trilinear pooling methods outperform I+MV+R, showing the power of pooling methods over linear late fusion. Moreover, our TP method achieves a 1% improvement over BP, which validates the effectiveness of our proposed pooling method. In addition, TTP outperforms TP, demonstrating the importance of the temporal information.
Comparative Results
We make a comprehensive comparison between our TTP method and other state-of-the-art action recognition methods.
To this end, we compute the efficiency (i.e., parameters and GFLOPs) and top-1 accuracy as the evaluation metrics for action recognition. In our experiments, all compared methods can be divided into two groups: 1) Raw-Video Based Methods: LRCN [4] and Composite LSTM [26] utilize RNNs to process the optical flow information, while Two-stream [24] adopts two-way CNNs to process the optical flow information. C3D [27], TSN (RGB-only) [28] and ResNet [8] employ large CNN models over RGB frames without using other information. 2) Compressed-Video Based Methods: DTMV-CNN [35] integrates both compressed videos and raw ones into a two-stream network, while CoViAR [30] is the most closely related model (w.r.t. our TTP) that replaces raw videos by compressed ones, without using any optical flow for action recognition. The comparative results are shown in Table 4. We have the following observations: (1) Our TTP model is the most efficient for video action recognition. Specifically, it contains only 17.5 × 10^6 parameters and requires only 1.4 GFLOPs per frame on average. (2) Our TTP model outperforms most state-of-the-art methods in terms of both efficiency and accuracy. (3) Although our TTP model achieves slightly lower accuracies than Two-stream, CoViAR and DTMV-CNN, it leads to significant efficiency improvements over these three methods. (4) As compared to CoViAR, our TTP model saves nearly 80% of the storage. Note that "I+MV+R" in Table 3 is essentially CoViAR using MobileNetV2 as the core CNN module. Under this fair comparison setting, our TTP model consistently yields accuracy improvements over CoViAR on most of the dataset splits. (5) Compressed-video based models generally achieve competitive accuracies compared to raw-video based models, showing the high feasibility of exploiting compressed videos for action recognition.
Conclusion
Video action recognition is a hot topic in computer vision. However, in the mobile computing age, this task has barely been explored on mobile devices. In this paper, we thus focus on a novel mobile video action recognition task, where only the computational capabilities of mobile devices are accessible. Instead of raw videos with huge storage, we choose to extract multiple modalities (including I-frames, motion vectors, and residuals) directly from compressed videos. By employing MobileNetV2 as the backbone network, we propose a novel Temporal Trilinear Pooling (TTP) module to fuse the multiple modalities for mobile video action recognition. In addition to motion vectors, we also provide a temporal fusion method that combines successive key frames to achieve a better temporal representation. The efficiency test on a mobile device indicates that our TTP model can perform mobile video action recognition at about 40 FPS. The comparative results on two benchmarks demonstrate that our TTP model outperforms existing action recognition methods in model size and time consumption, while achieving competitive accuracy. In our ongoing research, we will introduce an attention strategy into mobile video action recognition to obtain better results.
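As a small illustration of the "parameters" metric used in this comparison, the sketch below counts the weights of a single torchvision MobileNetV2; the 17.5 × 10^6 figure quoted above covers the full three-stream TTP model plus its fusion layers, so this shows only the per-backbone share, and GFLOPs would normally come from a separate profiler rather than from PyTorch directly.

```python
# A minimal sketch of counting parameters for one MobileNetV2 backbone.
import torch
from torchvision import models

def count_parameters(model: torch.nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

if __name__ == "__main__":
    backbone = models.mobilenet_v2()
    print(f"MobileNetV2 parameters: {count_parameters(backbone) / 1e6:.2f} M")
    # GFLOPs are usually obtained with a profiler that counts per-layer
    # multiply-adds; PyTorch itself does not report them directly.
```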
4,102
1908.09941
2970931569
In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is equal to minimization of a joint function over another variable. This problem includes difference of convex (DC) functions and a family of bi-convex functions as special cases. We develop stochastic algorithms and establish their first-order convergence for finding a (nearly) stationary solution of the target non-convex function under different conditions of the component functions. To the best of our knowledge, this is the first work that comprehensively studies stochastic optimization of non-convex inf-projection minimization problems with provable convergence guarantee. Our algorithms enable efficient stochastic optimization of a family of non-decomposable DC functions and a family of bi-convex functions. To demonstrate the power of the proposed algorithms we consider an important application in variance-based regularization, and experiments verify the effectiveness of our inf-projection based formulation and the proposed stochastic algorithm in comparison with previous stochastic algorithms based on the min-max formulation for achieving the same effect.
Another important result follows from Bennett's inequality. Corollary 5 in @cite_7 shows that: where @math is the sample variance. It is notable that @math is equivalent (up to a constant scaling) to the empirical variance @math. Similarly, the above uniform estimate can be extended to infinite loss classes using different complexity measures.
{ "abstract": [ "We give improved constants for data dependent and variance sensitive confidence bounds, called empirical Bernstein bounds, and extend these inequalities to hold uniformly over classes of functions whose growth function is polynomial in the sample size n. The bounds lead us to consider sample variance penalization, a novel learning method which takes into account the empirical variance of the loss function. We give conditions under which sample variance penalization is effective. In particular, we present a bound on the excess risk incurred by the method. Using this, we argue that there are situations in which the excess risk of our method is of order 1 n, while the excess risk of empirical risk minimization is of order 1 √n. We show some experimental results, which confirm the theory. Finally, we discuss the potential application of our results to sample compression schemes." ], "cite_N": [ "@cite_7" ], "mid": [ "1786332878" ] }
Stochastic Optimization for Non-convex Inf-Projection Problems
0
1908.09941
2970931569
In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is equal to minimization of a joint function over another variable. This problem includes difference of convex (DC) functions and a family of bi-convex functions as special cases. We develop stochastic algorithms and establish their first-order convergence for finding a (nearly) stationary solution of the target non-convex function under different conditions of the component functions. To the best of our knowledge, this is the first work that comprehensively studies stochastic optimization of non-convex inf-projection minimization problems with provable convergence guarantee. Our algorithms enable efficient stochastic optimization of a family of non-decomposable DC functions and a family of bi-convex functions. To demonstrate the power of the proposed algorithms we consider an important application in variance-based regularization, and experiments verify the effectiveness of our inf-projection based formulation and the proposed stochastic algorithm in comparison with previous stochastic algorithms based on the min-max formulation for achieving the same effect.
An intuitive approach to incorporating variance-based regularization is to include the first two terms on the right-hand side in the objective, which is the formulation proposed in @cite_7 , i.e., the sample variance penalty (SVP): An excess risk bound of @math may be achieved by solving the SVP. However, @cite_7 does not consider solution methods for solving the above variance-regularized empirical risk minimization problem.
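Since the SVP objective itself is elided above, the PyTorch sketch below only illustrates the general idea, namely the empirical mean of the losses plus a penalty proportional to the square root of their sample variance over n; the penalty weight tau and the exact normalization are placeholders and may differ from the formulation in @cite_7 .

```python
# A rough, illustrative sketch of a sample-variance-penalized objective:
# mean loss + tau * sqrt(sample variance / n). Constants are placeholders.
import torch

def svp_objective(losses: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """losses: per-example losses l(w; z_i), shape (n,), differentiable in w."""
    n = losses.numel()
    mean = losses.mean()
    var = losses.var(unbiased=True)            # sample variance of the losses
    return mean + tau * torch.sqrt(var / n + 1e-12)

if __name__ == "__main__":
    w = torch.randn(5, requires_grad=True)
    x, y = torch.randn(32, 5), torch.randn(32)
    per_example = (x @ w - y) ** 2             # squared losses of a linear model
    obj = svp_objective(per_example, tau=0.5)
    obj.backward()                             # gradients flow through mean and variance
    print(obj.item(), w.grad.norm().item())
```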
{ "abstract": [ "We give improved constants for data dependent and variance sensitive confidence bounds, called empirical Bernstein bounds, and extend these inequalities to hold uniformly over classes of functions whose growth function is polynomial in the sample size n. The bounds lead us to consider sample variance penalization, a novel learning method which takes into account the empirical variance of the loss function. We give conditions under which sample variance penalization is effective. In particular, we present a bound on the excess risk incurred by the method. Using this, we argue that there are situations in which the excess risk of our method is of order 1 n, while the excess risk of empirical risk minimization is of order 1 √n. We show some experimental results, which confirm the theory. Finally, we discuss the potential application of our results to sample compression schemes." ], "cite_N": [ "@cite_7" ], "mid": [ "1786332878" ] }
Stochastic Optimization for Non-convex Inf-Projection Problems
0
1908.09941
2970931569
In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is equal to minimization of a joint function over another variable. This problem includes difference of convex (DC) functions and a family of bi-convex functions as special cases. We develop stochastic algorithms and establish their first-order convergence for finding a (nearly) stationary solution of the target non-convex function under different conditions of the component functions. To the best of our knowledge, this is the first work that comprehensively studies stochastic optimization of non-convex inf-projection minimization problems with provable convergence guarantee. Our algorithms enable efficient stochastic optimization of a family of non-decomposable DC functions and a family of bi-convex functions. To demonstrate the power of the proposed algorithms we consider an important application in variance-based regularization, and experiments verify the effectiveness of our inf-projection based formulation and the proposed stochastic algorithm in comparison with previous stochastic algorithms based on the min-max formulation for achieving the same effect.
Recently, @cite_17 proposed a min-max formulation based on distributionally robust optimization for variance-based regularization as follows: where @math is a hyper-parameter, @math , @math , and @math is called the @math -divergence based on @math . The above problem is convex-concave when the loss function @math is convex in terms of @math . It was shown that the above min-max formulation is equivalent to the problem ( ) with a proper value of @math with high probability, under the assumption that the number of training examples @math is sufficiently large (see Theorem 1 and Theorem 2 in @cite_17 ).
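As a rough illustration of such a min-max objective, the cvxpy sketch below solves the inner maximization, reweighting the empirical losses within a divergence ball around the uniform distribution; the chi-square normalization and radius are placeholders and are not taken from @cite_17 .

```python
# An illustrative sketch of the inner (adversarial reweighting) maximization in
# a distributionally robust objective; divergence normalization is a placeholder.
import cvxpy as cp
import numpy as np

def robust_loss(losses: np.ndarray, rho: float = 0.5) -> float:
    n = len(losses)
    p = cp.Variable(n, nonneg=True)
    chi2 = cp.sum_squares(n * p - 1) / (2 * n)        # chi-square divergence to uniform
    prob = cp.Problem(cp.Maximize(p @ losses),
                      [cp.sum(p) == 1, chi2 <= rho / n])
    prob.solve()
    return prob.value

if __name__ == "__main__":
    losses = np.abs(np.random.default_rng(0).normal(size=100))
    print("average loss:", losses.mean(), "robust loss:", robust_loss(losses))
```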
{ "abstract": [ "We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds off of techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems." ], "cite_N": [ "@cite_17" ], "mid": [ "2944407464" ] }
Stochastic Optimization for Non-convex Inf-Projection Problems
0
1908.09941
2970931569
In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is equal to minimization of a joint function over another variable. This problem includes difference of convex (DC) functions and a family of bi-convex functions as special cases. We develop stochastic algorithms and establish their first-order convergence for finding a (nearly) stationary solution of the target non-convex function under different conditions of the component functions. To the best of our knowledge, this is the first work that comprehensively studies stochastic optimization of non-convex inf-projection minimization problems with provable convergence guarantee. Our algorithms enable efficient stochastic optimization of a family of non-decomposable DC functions and a family of bi-convex functions. To demonstrate the power of the proposed algorithms we consider an important application in variance-based regularization, and experiments verify the effectiveness of our inf-projection based formulation and the proposed stochastic algorithm in comparison with previous stochastic algorithms based on the min-max formulation for achieving the same effect.
To solve the above min-max formulation, @cite_6 proposed stochastic primal-dual algorithms based on the stochastic mirror-prox methods proposed in for addressing convex-concave problems. When the loss function @math is non-convex (e.g., the hypothesis class is defined by deep neural networks), the resulting min-max problem is non-convex in terms of @math but concave in terms of @math . Recently, @cite_12 proposed new stochastic algorithms for solving the non-convex-concave min-max problem when the objective function is weakly convex with respect to the minimization variable given the maximization variable. They proved convergence to a nearly stationary point of the minimization objective function. However, the stochastic algorithms proposed in are not scalable due to the need of updating and maintaining the dual variable @math .
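For intuition only, the NumPy sketch below runs a generic stochastic gradient descent-ascent loop for a problem of the form min_w max_{p in simplex} sum_i p_i * loss_i(w), with an exponentiated-gradient ascent step on p; it is not the algorithm of @cite_6 or @cite_12 , and the divergence penalty on p is omitted for brevity.

```python
# A generic stochastic gradient descent-ascent sketch for a min-max problem
# over the primal variable w and a weight vector p on the probability simplex.
import numpy as np

rng = np.random.default_rng(0)
n, dim = 200, 5
X = rng.normal(size=(n, dim))
y = X @ rng.normal(size=dim) + 0.1 * rng.normal(size=n)

w = np.zeros(dim)                 # primal (minimization) variable
p = np.ones(n) / n                # dual weights over examples (maximization variable)
eta_w, eta_p = 0.05, 0.5

for step in range(500):
    idx = rng.choice(n, size=32, p=p)                 # sample according to current weights
    grad_w = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    w -= eta_w * grad_w                               # stochastic descent step on w

    losses = (X @ w - y) ** 2                         # per-example losses
    p = p * np.exp(eta_p * losses)                    # exponentiated-gradient ascent on p
    p /= p.sum()                                      # projection back onto the simplex

print("final weighted loss:", float(p @ (X @ w - y) ** 2))
```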
{ "abstract": [ "Min-max saddle-point problems have broad applications in many tasks in machine learning, e.g., distributionally robust learning, learning with non-decomposable loss, or learning with uncertain data. Although convex-concave saddle-point problems have been broadly studied with efficient algorithms and solid theories available, it remains a challenge to design provably efficient algorithms for non-convex saddle-point problems, especially when the objective function involves an expectation or a large-scale finite sum. Motivated by recent literature on non-convex non-smooth minimization, this paper studies a family of non-convex min-max problems where the minimization component is non-convex (weakly convex) and the maximization component is concave. We propose a proximally guided stochastic subgradient method and a proximally guided stochastic variance-reduced method for expected and finite-sum saddle-point problems, respectively. We establish the computation complexities of both methods for finding a nearly stationary point of the corresponding minimization problem.", "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-, which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods." ], "cite_N": [ "@cite_12", "@cite_6" ], "mid": [ "2895628298", "2564590721" ] }
Stochastic Optimization for Non-convex Inf-Projection Problems
0
1908.09506
2969721933
When deploying autonomous agents in unstructured environments over sustained periods of time, adaptability and robustness oftentimes outweigh optimality as a primary consideration. In other words, safety and survivability constraints play a key role and in this paper, we present a novel, constraint-learning framework for control tasks built on the idea of constraints-driven control. However, since control policies that keep a dynamical agent within state constraints over infinite horizons are not always available, this work instead considers constraints that can be satisfied over a sufficiently long time horizon T > 0, which we refer to as limited-duration safety. Consequently, value function learning can be used as a tool to help us find limited-duration safe policies. We show that, in some applications, the existence of limited-duration safe policies is actually sufficient for long-duration autonomy. This idea is illustrated on a swarm of simulated robots that are tasked with covering a given area, but that sporadically need to abandon this task to charge batteries. We show how the battery-charging behavior naturally emerges as a result of the constraints. Additionally, using a cart-pole simulation environment, we show how a control policy can be efficiently transferred from the source task, balancing the pole, to the target task, moving the cart to one direction without letting the pole fall down.
Finding feasible control constraints that can be translated to a set of state constraints has been of particular interest in both the controls and machine learning communities. Early work includes the study of artificial potential functions in the context of obstacle avoidance, and the construction of so-called navigation functions was studied in @cite_3 . Alternatively, if there exists a control Lyapunov function @cite_39 , one can stabilize the agent while keeping it inside a level set of the function. Control Lyapunov functions can be learned through demonstrations @cite_56 , for example, and Lyapunov stability has also been used in safe reinforcement learning (see @cite_28 @cite_31 @cite_55 for example). As inverse optimality @cite_23 dictates that finding a stabilizing policy is equivalent to finding an optimal policy in terms of some cost function, these approaches can also be viewed as optimization-based techniques.
{ "abstract": [ "Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied on safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.", "", "A methodology for exact robot motion planning and control that unifies the purely kinematic path planning problem with the lower level feedback controller design is presented. Complete information about a freespace and goal is encoded in the form of a special artificial potential function, called a navigation function, that connects the kinematic planning problem with the dynamic execution problem in a provably correct fashion. The navigation function automatically gives rise to a bounded-torque feedback controller for the robot's actuators that guarantees collision-free motion and convergence to the destination from almost all initial free configurations. A formula for navigation functions that guide a point-mass robot in a generalized sphere world is developed. The simplest member of this family is a space obtained by puncturing a disk by an arbitrary number of smaller disjoint disks representing obstacles. The other spaces are obtained from this model by a suitable coordinate transformation. Simulation results for planar scenarios are provided. >", "We consider an imitation learning approach to model robot point-to-point (also known as discrete or reaching) movements with a set of autonomous Dynamical Systems (DS). Each DS model codes a behavior (such as reaching for a cup and swinging a golf club) at the kinematic level. An estimate of these DS models are usually obtained from a set of demonstrations of the task. When modeling robot discrete motions with DS, ensuring stability of the learned DS is a key requirement to provide a useful policy. In this paper we propose an imitation learning approach that exploits the power of Control Lyapunov Function (CLF) control scheme to ensure global asymptotic stability of nonlinear DS. Given a set of demonstrations of a task, our approach proceeds in three steps: (1) Learning a valid Lyapunov function from the demonstrations by solving a constrained optimization problem, (2) Using one of the-state-of-the-art regression techniques to model an (unstable) estimate of the motion from the demonstrations, and (3) Using (1) to ensure stability of (2) during the task execution via solving a constrained convex optimization problem. The proposed approach allows learning a larger set of robot motions compared to existing methods that are based on quadratic Lyapunov functions. 
Additionally, by using the CLF formalism, the problem of ensuring stability of DS motions becomes independent from the choice of regression method. Hence it allows the user to adopt the most appropriate technique based on the requirements of the task at hand without compromising stability. We evaluate our approach both in simulation and on the 7 degrees of freedom Barrett WAM arm. Proposing a new parameterization to model complex Lyapunov functions.Estimating task-oriented Lyapunov functions from demonstrations.Ensuring stability of nonlinear autonomous dynamical systems.Applicability to any smooth regression method.", "Abstract This note presents an explicit proof of the theorem - due to Artstein - which states that the existence of a smooth control-Lyapunov function implies smooth stabilizability. Moreover, the result is extended to the real-analytic and rational cases as well. The proof uses a ‘universal’ formula given by an algebraic function of Lie derivatives; this formula originates in the solution of a simple Riccati equation.", "The concept of a robust control Lyapunov function ( rclf ) is introduced, and it is shown that the existence of an rclf for a control-affine system is equivalent to robust stabilizability via continuous state feedback. This extends Artstein's theorem on nonlinear stabilizability to systems with disturbances. It is then shown that every rclf satisfies the steady-state Hamilton--Jacobi--Isaacs (HJI) equation associated with a meaningful game and that every member of a class of pointwise min-norm control laws is optimal for such a game. These control laws have desirable properties of optimality and can be computed directly from the rclf without solving the HJI equation for the upper value function.", "In many real-world reinforcement learning (RL) problems, besides optimizing the main objective function, an agent must concurrently avoid violating a number of constraints. In particular, besides optimizing performance it is crucial to guarantee the of an agent during training as well as deployment (e.g. a robot should avoid taking actions - exploratory or not - which irrevocably harm its hardware). To incorporate safety in RL, we derive algorithms under the framework of Constrained Markov decision problems (CMDPs), an extension of the standard Markov decision problems (MDPs) augmented with constraints on expected cumulative costs. Our approach hinges on a novel method. We define and present a method for constructing Lyapunov functions, which provide an effective way to guarantee the global safety of a behavior policy during training via a set of local, linear constraints. Leveraging these theoretical underpinnings, we show how to use the Lyapunov approach to systematically transform dynamic programming (DP) and RL algorithms into their safe counterparts. To illustrate their effectiveness, we evaluate these algorithms in several CMDP planning and decision-making tasks on a safety benchmark domain. Our results show that our proposed method significantly outperforms existing baselines in balancing constraint satisfaction and performance." ], "cite_N": [ "@cite_28", "@cite_55", "@cite_3", "@cite_56", "@cite_39", "@cite_23", "@cite_31" ], "mid": [ "2618318883", "2885381174", "2110144538", "1991667713", "2041076275", "1970832114", "2964340170" ] }
Constraint Learning for Control Tasks with Limited Duration Barrier Functions
Acquiring an optimal policy that attains the maximum return over some time horizon is of primary interest in the literature of both reinforcement learning [1][2][3] and optimal control [4]. A large number of algorithms have been designed to successfully control systems with complex dynamics to accomplish specific tasks, such as balancing an inverted pendulum and letting a humanoid robot run to a target location. Those algorithms may result in control strategies that are energy-efficient, take the shortest path to the goal, spend less time to accomplish the task, and sometimes outperform human beings in these senses (cf. [5]). As we can observe in daily life, on the other hand, it is often difficult to attribute optimality to human behaviors, e.g., the behaviors are hardly the most efficient for any specific task (cf. [6]). Instead, humans are capable of generalizing the behaviors acquired through completing a certain task to deal with unseen situations. This fact raises the question of how one should design a learning algorithm that generalizes across tasks rather than focuses on a specific one. In this paper, we hypothesize that this can be achieved by letting the agents acquire a set of good enough policies when completing one task, instead of finding a single optimal policy, and reuse this set for another task. Specifically, we consider safety, which refers to avoiding certain states, as useful information shared among different tasks, and we regard limited-duration safe policies as good enough policies. Our work is built on the idea of constraints-driven control [7,8], a methodology for controlling agents by telling them to satisfy constraints without specifying a single optimal path. If feasibility of the assigned constraints is guaranteed, this methodology avoids recomputing an optimal path when a new task is given and instead enables high-level compositions of constraints. However, state constraints are not always feasible, and arbitrary compositions of constraints cannot be validated in general [9]. We tackle this feasibility issue by relaxing safety (i.e., forward invariance [10] of the set of safe states) to limited-duration safety, by which we mean satisfaction of safety over some finite time horizon T > 0 (see Figure I.1). For an agent starting from a certain subset of the safe region, one can always find a set of policies that render this agent safe up to some finite time. To guarantee limited-duration safety, we propose a limited duration control barrier function (LDCBF). The idea is based on local model-based control that constrains the instantaneous control input at each time to restrict the growth of the LDCBF value by solving a computationally inexpensive quadratic programming (QP) problem.
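To make the QP-based filtering idea concrete, here is a minimal cvxpy sketch (not the authors' implementation) of a pointwise quadratic program that stays close to a nominal input while bounding the growth of a barrier-like function along control-affine dynamics; the dynamics, barrier, class-K function, and constants below are toy placeholders, and the precise LDCBF constraint is the one given later in Definition IV.2.

```python
# A toy barrier-style QP filter for xdot = f(x) + g(x) u:
#   minimize ||u - u_nom||^2  subject to  L_f B(x) + L_g B(x) u <= alpha(c - B(x)) + beta * B(x)
import cvxpy as cp
import numpy as np

def safe_input(x, u_nom, B, gradB, f, g, c=1.0, beta=0.1, alpha=lambda s: 2.0 * s):
    u = cp.Variable(len(u_nom))
    Bx, dB = B(x), gradB(x)
    lie_f = float(dB @ f(x))                       # L_f B(x)
    lie_g = dB @ g(x)                              # L_g B(x)
    constraint = [lie_f + lie_g @ u <= alpha(c - Bx) + beta * Bx]
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)), constraint).solve()
    return u.value

if __name__ == "__main__":
    # Single integrator in 2D with a quadratic barrier-like value around the origin.
    f = lambda x: np.zeros(2)
    g = lambda x: np.eye(2)
    B = lambda x: float(x @ x)
    gradB = lambda x: 2.0 * x
    print(safe_input(np.array([0.5, -0.2]), np.array([1.0, 1.0]), B, gradB, f, g))
```

How a suitable function B can actually be obtained is the subject of the value-function-based construction that follows.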
To find an LDCBF, we make use of so-called global value function learning. More specifically, we assign a high cost to unsafe states and a lower cost to safe states, and learn the value function (or discounted infinite-horizon cost) associated with any given policy. Then, it is shown that the value function associated with any given policy is an LDCBF, i.e., a nonempty set of limited-duration safe policies can be obtained (Section IV-B). Contrary to the optimal control and Lyapunovbased approaches that only single out an optimal policy, our learning framework aims at learning a common set of policies that can be shared among different tasks. Thus, our framework can be contextualized within the so-called lifelong learning [11] and transfer learning [12] (or safe transfer; see Section V-B). The rest of this paper is organized as follows: Section II discusses the related work, including constraints-driven control and transfer learning, Section III presents notations, assumptions made in this paper, and some background knowledge. Subsequently, we present our main contributions and their applications in Section IV and Section V, respectively. In Section VI, we first validate LDCBFs on an existing control problem (see Section II). Then, our learning framework is applied to the cart-pole simulation environment in DeepMind Control Suite [13]; safety is defined as keeping the pole from falling down, and we use an LDCBF obtained after learning the balance task to facilitate learning the new task, namely, moving the cart without letting the pole fall down. III. PRELIMINARIES Throughout, R, R ≥0 and Z + are the sets of real numbers, nonnegative real numbers and positive integers, respectively. Let · R d := x, x R d be the norm induced by the inner product x, y R d := x T y for d-dimensional real vectors x, y ∈ R d , where (·) T stands for transposition. In this paper, we consider an agent with system dynamics described by an ordinary differential equation: dx dt = h(x(t), u(t)),(1) where x(t) ∈ R nx and u(t) ∈ U ⊂ R nu are the state and the instantaneous control input of dimensions n x , n u ∈ Z + , and h : R nx × U → R nx . Let X be the state space which is a compact subset of R nx . 1 In this work, we make the following assumptions. Assumption III.1. For any locally Lipschitz continuous policy φ : X → U, h is locally Lipschitz with respect to x. Assumption III.2. The control space U(⊂ R nu ) is a polyhedron. Given a policy φ : X → U and a discount factor β > 0, define the value function associated with the policy φ by V φ,β (x) := ∞ 0 e −βt (x(t))dt, β > 0, where (x(t)) is the immediate cost and x(t) is the trajectory starting from x(0) = x. When V φ,β is continuously differentiable over int(X ), namely, the interior of X , we obtain the Hamilton-Jacobi-Bellman (HJB) equation [44]: βV φ,β (x) = ∂V φ,β (x) ∂x h(x, φ(x)) + (x), x ∈ int(X ).(2) Now, if the immediate cost (x) is positive for all x ∈ X except for the equilibrium, and that a zero-cost state is globally asymptotically stable with the given policy φ : X → U, one can expect that the value function V φ,0 (x) := ∞ 0 (x(t))dt has finite values over X . In this case, the HJB equation becomesV φ,0 (x(t)) := dV φ,0 dt (x(t)) = − (x(t)), i.e., V φ,0 is decreasing over time. As such, it is straightforward to see that V φ,0 is a control Lyapunov function, i.e., there always exists a policy that satisfies the decrease condition for V φ,0 . 
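As a side illustration of the value functions used above, the discounted value V^{phi,beta}(x) can be approximated numerically by forward-simulating the closed-loop dynamics and accumulating the discounted running cost, as in the toy NumPy sketch below; the dynamics, policy, and cost are placeholders.

```python
# A toy approximation of V^{phi,beta}(x) = int_0^inf e^{-beta t} l(x(t)) dt
# via forward Euler simulation, truncated at a finite horizon T.
import numpy as np

def value_estimate(x0, f, policy, cost, beta=0.5, dt=0.01, T=20.0):
    x, t, v = np.array(x0, dtype=float), 0.0, 0.0
    while t < T:
        v += np.exp(-beta * t) * cost(x) * dt      # accumulate discounted running cost
        x = x + dt * f(x, policy(x))               # Euler step of xdot = h(x, u)
        t += dt
    return v

if __name__ == "__main__":
    f = lambda x, u: -x + u                        # stable scalar toy dynamics
    policy = lambda x: -0.5 * x
    cost = lambda x: float(x ** 2)
    print(value_estimate(1.0, f, policy, cost))
```

Of course, treating such a value function as a control Lyapunov function has its own caveats, as discussed next.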
However, there are two major limitations to this approach: i) one must assume that the agent stabilizes in a zero-cost state by the given policy φ, and ii) forward invariant sublevel sets of the control Lyapunov function usually become too conservative with respect to the given safe states. 2 To remedy these drawbacks, we present our major contribution in the next section. IV. CONSTRAINT LEARNING FOR CONTROL TASKS In this section, we propose limited duration control barrier functions (LDCBFs), and present their properties and a practical way to find an LDCBF. A. Limited Duration Control Barrier Functions Before formally presenting LDCBFs, we give the following definition. Definition IV.1 (Limited-duration safety). Given an open set of safe states C, let B T LD be a closed nonempty subset of C ⊂ X . The dynamical system (1) is said to be safe up to time T , if there exists a policy φ that ensures x(t) ∈ C for all 0 ≤ t < T whenever x(0) ∈ B T LD . Now, we give the definition of the LDCBFs. Definition IV.2. Let a function B LD : X → R ≥0 be continuously differentiable. Suppose that h(x, u) = f (x) + g(x) u, x ∈ X , u ∈ U, and that the set of safe states is given by C := x ∈ X : B LD (x) < L β , L > 0, β > 0.(3) Define the set B T LD = x ∈ X : B LD (x) ≤ Le −βT β ⊂ C, for some T > 0. Define also L f and L g as the Lie derivatives along f and g. Then, B LD is called a limited duration control barrier function for C and for the time horizon T if B T LD is nonempty and if there exists a monotonically increasing locally Lipschitz continuous function 3 α : R → R such that α(0) = 0 and inf u∈U {L f B LD (x) + L g B LD (x)u} ≤ α Le −βT β − B LD (x) + βB LD (x),(4) for all x ∈ C. Given an LDCBF, the admissible control space S T LD (x), x ∈ C, can be defined as S T LD (x) := {u ∈ U : L f B LD (x) + L g B LD (x)u ≤ α Le −βT β − B LD (x) + βB LD (x)}.(5) Given an LDCBF, safety up to time T is guaranteed if the initial state is taken in B T LD and an admissible control is employed as the following theorem claims. Theorem IV.1. Given a set of safe states C defined by (3) and an LDCBF B LD defined on X under Assumption III.1, any locally Lipschitz continuous policy φ : X → U that satisfies φ(x) ∈ S T LD (x), ∀x ∈ C, renders the dynamical system (1) safe up to time T whenever the initial state is in B T LD . Proof. See Appendix A. When h(x, u) = f (x) + g(x)u, x ∈ X , u ∈ U, one can constrain the control input within the admissible control space S T LD (x), x ∈ C, using a locally accurate model via QPs in the same manner as control barrier functions and control Lyapunov functions. Here, we present a general form of control syntheses via QPs. Proposition IV.1. Suppose that h(x, u) = f (x) + g(x)u, x ∈ X , u ∈ U. Given an LDCBF B LD with a locally Lipschitz derivative and the admissible control space S T LD (x * ) at x * ∈ C defined in (5), consider the QP: φ(x * ) = argmin u∈S T LD (x) u T H(x * )u + 2b(x * ) T u,(6) where H and b are Lipschitz continuous at x * ∈ C, and H(x * ) = H T (x * ) is positive definite. If the width 4 of a feasible set is strictly larger than zero, then, under Assumption III.2, the policy φ(x) defined in (6) is unique and Lipschitz continuous with respect to the state at x * . Proof. Slight modifications of [45, Theorem 1] proves the proposition. To see an advantage of considering LDCBFs, we show that an LDCBF can be obtained systematically as described next. B. 
Finding a Limited Duration Control Barrier Function We present a possible way to find an LDCBF B LD for the set of safe states through global value function learning. Let (x) ≥ 0, ∀x ∈ X , and suppose that the set of safe states is given by C := {x ∈ X : (x) < L} , L > 0.(7) Given the dynamical system defined in Definition IV.2, consider the virtual systeṁ x(t) = f (x(t)) + g(x(t))φ(x(t)) if x(t) ∈ C otherwise 0,(8) for a policy φ. Assume that we employ a continuously differentiable function approximator to approximate the value function V φ,β for the virtual system, and letV φ,β denote an approximation of V φ,β . By using the HJB equation (2), define the estimated immediate cost functionˆ aŝ (x) = βV φ,β (x) − L fV φ,β (x)−L gV φ,β (x)φ(x), ∀x ∈ C. Select c ≥ 0 so thatˆ c (x) :=ˆ (x) + c ≥ 0 for all x ∈ C, and define the functionV φ,β c (x) :=V φ,β (x) + c β . Then, the following theorem holds. Theorem IV.2. Consider the set B T LD = x ∈ X :V φ,β c (x) ≤L e −βT β , whereL := inf y ∈ X \ C βV φ,β c (y). IfB T LD is nonempty, then the dynamical system starting from the initial state inB T LD is safe up to time T when the policy φ is employed, andV φ,β c (x) is an LDCBF for the set C := x ∈ X :V φ,β c (x) <L β ⊂ C. Proof. See Appendix C. Remark IV.1. We need to considerV φ,β c instead of V φ,β because the immediate cost function and the virtual system (8) need to be sufficiently smooth to guarantee that V φ,β is continuously differentiable. In practice, the choice of c andL affects conservativeness of the set of safe states. Note, to enlarge the set B T LD , the immediate cost (x) is preferred to be close to zero for x ∈ C, and L needs to be sufficiently large. Example IV.1. As an example of finding an LDCBF, we use a deep neural network. Suppose the discrete-time transition is given by (x n , u n , (x n ), x n+1 ), where n ∈ 0 ∪ Z + is the time instant. Then, by executing a given policy φ, we store the negative data, where x n+1 / ∈ C, and the positive data, where x n+1 ∈ C, separately, and conduct prioritized experience replay [46,47]. Specifically, initialize a target networkV φ,β Target and a local networkV φ,β Local , and update the local network by sampling a random minibatch of N negative and positive transitions {(x ni , u ni , (x ni ), x ni+1 )} i∈{1,2,...,N } to minimize 1 N N i=1 (y ni −V φ,β Local (x ni )), where y ni = (x ni ) + γV φ,β Target (x ni+1 ) x ni+1 ∈ C, L 1−γ x ni+1 / ∈ C. Here, γ ≈ − log (β)/∆ t is a discount factor for a discretetime case, where ∆ t is the time interval of one time step. The target network is soft-updated using the local network byV φ,β Target ← µV φ,β Local + (1 − µ)V φ,β Target for µ 1. One can transform the learned local network to a continuous-time form via multiplying it by ∆ t . Although we cannot ensure forward invariance of the set C using LDCBFs, the proposed approach is still set theoretic. As such, we can consider the compositions of LDCBFs. C. Compositions of Limited Duration Barrier Functions The Boolean compositional CBFs were studied in [23,48]. In [23], max and min operators were used for the Boolean operations, and nonsmooth barrier functions were proposed out of necessity. However, it is known that, even if two sets C 1 ⊂ X and C 2 ⊂ X are controlled invariant [49, page 21] for the dynamical system (1), the set C 1 ∩ C 2 is not necessarily controlled invariant, while C 1 ∪C 2 is indeed controlled invariant [49, Proposition 4.13]. We can make a similar assertion for limited-duration safety as follows. Proposition IV.2. 
Assume that there exists a limited-duration safe policy for each set of safe states C j ⊂ X , j ∈ {1, 2, . . . , J}, J ∈ Z + , that renders an agent with the dynamical system (1) safe up to time T whenever starting from inside a closed nonempty set B LDj ⊂ C j . Then, given the set of safe states C := J j=1 C j , there also exists a policy rendering the dynamical system safe up to time T whenever starting from any state in B LD := J j=1 B LDj . Proof. A limited-duration safe policy for C j also keeps the agent inside the set C up to time T when starting from inside B LDj . If there exist LDCBFs for C j s, it is natural to ask if there exists an LDCBF for C. Because of the nonsmoothness stemming from Boolean compositions, however, obtaining an LDCBF for C requires an additional learning in general (see Appendix F). Also, existence of an LDCBF for the intersection of multiple sets of safe states is not guaranteed, and we need an additional learning as well. So far, we have seen a possible way to obtain an LDCBF for a given set of safe states expressed as in (7). As our approach is set-theoretic rather than specifying a single optimal policy, it is also compatible with the constraints-driven control and transfer learning as described in the next section. V. APPLICATIONS In this section, we present two practical examples that illustrate benefits of considering LDCBFs, namely, long-duration autonomy and transfer learning. A. Applications to Long-duration Autonomy In many applications, guaranteeing particular properties (e.g., forward invariance) over an infinite-time horizon is difficult or some forms of approximations are required. Specifically, when a function approximator is employed, there will certainly be an approximation error. Nevertheless, it is often sufficient to guarantee safety up to certain finite time, and our proposed LDCBFs act as useful relaxations of CBFs. To see that one can still achieve long-duration autonomy by using LDCBFs, we consider the settings of work in [27]. Suppose that the state x := [E, p T ] T ∈ R 3 has the information of energy level E ∈ R ≥0 and the position p ∈ R 2 of an agent. Suppose also that E max > 0 is the maximum energy level and ρ(p) ≥ 0 (equality holds only when the agent is at a charging station) is the energy required to bring the agent to a charging station from the position p ∈ R 2 . Then, although we emphasize that we can obtain an LDCBF by value function learning if necessary, let us assume that the function B LD (x) := E max − E + ρ(p) ≥ 0 is an LDCBF, for simplicity. Then, by letting L = β(E max − E min ) for some β > 0 and for the minimum necessary energy level E min , 0 ≤ E min < E max , the set of safe states can be given by C := x ∈ X : B LD (x) < L β . Now, under these settings, the following proposition holds. Proposition V.1. Assume that the energy dynamics is lower bounded by dE dt ≥ −K d , ∃K d > 0, which implies that the least exit timeT energy (E) of E being below E min iŝ T energy (E) = (E − E min ) K d . Also, suppose we employ a locally Lipschitz continuous policy φ that satisfies the LDCBF condition using a battery dynamics model dÊ dt = −K d . Then, by taking T >T energy (E 0 ) for the initial energy level E 0 > E min and letting B T LD := x ∈ X : B LD (x) ≤ Le −βT β ⊂ C, the agent starting from a state in B T LD will reach the charging station before the energy reaches to E min . Proof. See Appendix D. 
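As a concrete check of Proposition V.1, the sketch below evaluates B_LD(x) = E_max - E + rho(p), the threshold L*exp(-beta*T)/beta defining the initial set of Proposition V.1, and the worst-case exit time T_energy_hat(E_0) = (E_0 - E_min)/K_d, using the parameters later adopted in Section VI-A (E_max = 1.0, E_min = 0.55, K_d = 0.01, beta = 0.005, T = 50); the initial energy level and the value of rho(p) below are illustrative placeholders, not values from the paper.

import numpy as np

E_max, E_min, K_d, beta, T = 1.0, 0.55, 0.01, 0.005, 50.0
L = beta * (E_max - E_min)                    # so that C = {B_LD < L/beta}

B_LD = lambda E, rho: E_max - E + rho         # LDCBF of Section V-A
in_initial_set = lambda E, rho: B_LD(E, rho) <= (L / beta) * np.exp(-beta * T)
T_energy_hat = lambda E0: (E0 - E_min) / K_d  # least exit time under dE/dt >= -K_d

E0, rho0 = 0.95, 0.02                         # placeholder initial energy and distance-to-charger cost
print(T > T_energy_hat(E0))                   # True: 50 > 40, the horizon covers the worst case
print(in_initial_set(E0, rho0))               # True: 0.07 <= 0.45 * exp(-0.25) ~= 0.35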
Hence, LDCBFs are shown to be applicable to some cases where it is difficult to guarantee that certain properties hold over infinite horizons, but where limited-duration safety suffices. One of the benefits of using LDCBFs is that, once a set of limited-duration safe policies or good enough policies is obtained, one can reuse them for different tasks. Therefore, given that we can obtain an LDCBF through global value function learning for a policy that is not necessarily stabilizing the system, it is natural to ask if one can employ LDCBFs to transfer knowledge. Indeed, LDCBFs also have good compatibility with transfer learning (or safe transfer) as discussed below. B. Applications to Transfer Learning Given the immediate cost function, reinforcement learning aims at finding an optimal policy. Clearly, the obtained policy is not optimal with respect to a different immediate cost function in general. Therefore, employing the obtained policy straightforwardly in different tasks makes no sense. However, it is quite likely that the obtained policy is good enough even in different tasks because some mandatory constraints such as avoiding unsafe regions of the state space are usually shared among different tasks. Therefore we wish to exploit constraint learning for the sake of transfer learning, i.e., we learn constraints which are common to the target task while learning source tasks. Definition V.1 (Transfer learning, [12, modified version of Definition 1]). Given a set of training data D S for one task (i.e., source task) denoted by T S (e.g., an MDP) and a set of training data D T for another task (i.e., target task) denoted by T T , transfer learning aims to improve the learning of the target predictive function f T (i.e., a policy in our example) in D T using the knowledge in D S and T S , where D S = D T , or T S = T T . For example, when learning an optimal policy for the balance task of the cart-pole problem, one can simultaneously learn a set of limited-duration safe policies that keep the pole from falling down up to certain time T > 0. The set of these limited-duration safe policies is obviously useful for other tasks such as moving the cart to one direction without letting the pole fall down. Here, we present a possible application of limited duration control barrier functions to transfer learning. We take the following steps: 1) Design J ∈ Z + cost functions j , j ∈ {1, 2, . . . , J}, each of which represents a constraint by defining a set of safe state C j := {x ∈ X : j (x) < L} , L > 0, j ∈ {1, 2, . . . , J}. 2) Conduct reinforcement learning for a cost function by using any of the reinforcement learning techniques. 3) No matter the currently obtained policy φ is optimal or not, one can obtain an LDCBF B LDj for each set C j , j ∈ {1, 2, . . . , J}. More specifically, B LDj is given by B LDj (x) := V φ,β j (x) := ∞ 0 e −βt j (x(t))dt, j ∈ {1, 2, . . . , J}, for x(0) = x. 4) When learning a new task, policies are constrained by LDCBFs depending on which constraints are common to the new task. We study some practical implementations. Given LDCBFs B LDj , j ∈ {1, 2, ..., J}, define the set Φ T j of admissible policies as Φ T j := {φ :φ(x) ∈ S T LDj (x), ∀x ∈ C j } ⊂ Φ := {φ : φ(x) ∈ U, ∀x ∈ X }, where S T LDj (x) is the set of admissible control inputs at x for the jth constraint. If an optimal policy φ T T for the target task T T is included in Φ T j , one can conduct learning for the target task within the policy space Φ T j . 
If not, one can still consider Φ T j as a soft constraint and can explore the policy space Φ \ Φ T j with a given probability or can just select the initial policy from Φ T j . In practice, a parametrized policy is usually considered; a policy φ θ expressed by a parameter θ is updated via policy gradient methods [50]. If the policy is in the linear form with a fixed feature vector, the projected policy gradient method [51] can be used. Thanks to the fact that an LDCBF defines an affine constraint on instantaneous control inputs if the system dynamics is affine in control, the projected policy gradient method looks like θ ← Γ j [θ + λ∇ θ F T T (θ)]. Here, Γ j : Φ → Φ T j projects a policy onto the affine constraint defined by the jth constraint and F T T (θ) is the objective function for the target task which is to be maximized. For the policy not in the linear form, one may update policies based on LDCBFs by modifying the deep deterministic policy gradient (DDPG) method [52]: because through LDCBFs, the global property (i.e., limited-duration safety) is ensured by constraining local control inputs, it suffices to add penalty terms to the cost when updating a policy using samples. For example, one may employ the log-barrier extension proposed in [53], which is a smooth approximation of the hard indicator function for inequality constraints but is not restricted to feasible points. VI. EXPERIMENTS In this section, we validate our learning framework. First, we show that LDCBFs indeed work for the constraints-driven control problem considered in Section V-A by simulation. Then, we apply LDCBFs to a transfer learning problem for the cart-pole simulation environment. A. Constraints-driven coverage control of multi-agent systems Let the parameters be E max = 1.0, E min = 0.55, K d = 0.01, β = 0.005 and T = 50.0 > 45.0 = (E max − E min )/K d . We consider six agents (robots) with single integrator dynamics. An agent of the position p i := [x i , y i ] T is assigned a charging station of the position p charge,i , where x and y are the X position and the Y position, respectively. When the agent is close to the station (i.e., p i − p charge,i R 2 ≤ 0.05), it remains there until the battery is charged to E ch = 0.92. Actual battery dynamics is given by dE/dt = −0.01E. The coverage control task is encoded as Lloyd's algorithm [54] aiming at converging to the Centroidal Voronoi Tesselation, but with a soft margin so that the agent prioritizes the safety constraint. The locational cost used for the coverage control task is given as follows [55]: MATLAB simulation (the simulator is provided on the Robotarium [56] website: www.robotarium.org), we used the random seed rng(5) for determining the initial states. Note, for every agent, the energy level and the position are set so that it starts from inside the set B T LD . Figure VI.1 shows (a) the images of six agents executing coverage tasks and (b) images of the agents three of which are charging their batteries. Figure VI-B shows the simulated battery voltage data of the six agents, from which we can observe that LDCBFs worked effectively for the swarm of agents to avoid depleting their batteries. 6 i=1 Vi(p) p i −p 2 ϕ(p)dp, where V i (p) = {p ∈ X : p i −p ≤ p j −p , ∀j = i} is the Voronoi cell for the agent i. In particular, we used ϕ([x,ŷ] T ) = e −{(x−0.2) 2 +(ŷ−0.3) 2 }/0.06 + 0.5e −{(x+0.2) 2 +(ŷ+0.1) 2 }/0.03 . In B. Transfer from Balance to Move: Cart-pole problem Next, we apply LDCBFs to transfer learning. 
The simulation environment and the deep learning framework used in this experiment are "Cart-pole" in DeepMind Control Suite and PyTorch [57], respectively. We take the following steps: 1) Learn a policy that successfully balances the pole by using DDPG [52]. 2) Learn an LDCBF by using the obtained actor network. 3) Try a random policy with the learned LDCBF and a (locally) accurate model to see that LDCBF works reasonably. 4) With and without the learned LDCBF, learn a policy that moves the cart to left without letting the pole fall down, which we refer to as move-the-pole task. The parameters used for this experiment are summarized in Table VI.1. Here, angle threshold stands for the threshold of cos ψ where ψ is the angle of the pole from the standing position, and position threshold is the threshold of the cart position p. The angle threshold and the position threshold are used to terminate an episode. Note that the cart-pole environment of MuJoCo [58] xml data in DeepMind Control Suite is modified so that the cart can move between −3.8 and 3.8. As presented in Example IV.1, we use prioritized experience replay when learning an LDCBF. Specifically, we store the positive and the negative data, and sample 4 data points from the positive one and the remaining 60 data points from the negative one. In this experiment, actor, critic and LDCBF networks use ReLU nonlinearities. The actor network and the LDCBF network consist of two layers of 300 → 200 units, and the critic network is of two layers of 400 → 300 units. The control input vector is concatenated to the state vector from the second critic layer. Step1: The average duration (i.e., the first exit time, namely, the time when the pole first falls down) out of 10 seconds (corresponding to 1000 time steps) over 10 trials for the policy learned through the balance task by DDPG was 10 seconds. Step2: Then, by using this successfully learned policy, an LDCBF is learned by assigning the cost (x) = 1.0 for cos ψ < 0.2 and (x) = 0.1 elsewhere. Also, because the LDCBF is learned in a discrete-time form, we transform it to a continuous-time form via multiplying it by ∆ t = 0.01. When learning an LDCBF, we initialize each episode as follows: the angle ψ is uniformly sampled within −1.5 ≤ ψ ≤ 1.5, the cart velocityṗ is multiplied by 100 and the angular velocityψ is multiplied by 200 after being initialized by DeepMind Control Suite. The LDCBF learned by using this policy is illustrated in Figure VI Step3: To test this LDCBF, we use a uniformly random policy (φ(x) takes the value between −1 and 1) constrained by the LDCBF with the function α(q) = max {0.1q, 0} and with the time constant T = 5.0. When imposing constraints, we use the (locally accurate) control-affine model of the cart-pole in the work [59], where we replace the friction parameters by zeros for simplicity. The average duration out of 10 seconds over 10 trials for this random policy was 10 seconds, which indicates that the LDCBF worked sufficiently well. We also tried this LDCBF with the function α(q) = max {3.0q, 0} and T = 5.0, which resulted in the average duration of 5.58 seconds. Moreover, we tried the fixed policy φ(x) = 1.0, ∀x ∈ X , with the function α(q) = max {0.1q, 0} and T = 5.0, and the average duration was 4.73 seconds, which was sufficiently close to T = 5.0. Step4: For the move-the-pole task, we define the success by the situation where the cart position p, −3.8 ≤ p ≤ 3.8, ends up in the region of p ≤ −1.8 without letting the pole fall down. 
The angle ψ is uniformly sampled within −0.5 ≤ ψ ≤ 0.5 and the rest follow the initialization of DeepMind Control Suite. The reward is given by (1 + cos ψ)/2×(utils.rewards.tolerance(ṗ + 1.0, bounds = (−2.0, 0.0), margin = 0.5)), where utils.rewards.tolerance is the function defined in [13]. In other words, to move the pole to left, we give high rewards when the cart velocity is negative and the pole is standing up. To use the learned LDCBF for DDPG, we store matrices and vectors used in linear constraints along with other variables such as control inputs and states, which are then used for experience replay. Then, the logbarrier extension cost proposed in [53] is added when updating policies. Also, we try DDPG without using LDCBF for the move-the-pole task. Both approaches initialize the policy by the one obtained after the balance task. The average success rates of the policies obtained after the numbers of episodes up to 15 over 10 trials are given in Table VI.2 for DDPG with the learned LDCBF and DDPG without LDCBF. This result implies that our proposed approach successfully transferred information from the source task to the target task by sharing a common safety constraint. VII. CONCLUSION In this paper, we presented a notion of limited-duration safety as a relaxation of forward-invariance of a set of safe states. Then, we proposed limited-duration control barrier functions that are used to guarantee limited-duration safety by using locally accurate model of agent dynamics. We showed that LDCBFs can be obtained through global value function learning, and analyzed some of their properties. LDCBFs were validated through persistent coverage control tasks and were successfully applied to a transfer learning problem via sharing a common state constraint. APPENDIX A PROOF OF THEOREM IV.1 Under Assumption III.1, the trajectories x(t) with an initial condition x(0) ∈ B T LD exist and are unique over t ≥ 0. Let the first time at which the trajectory x(t) exits C be T e > 0 and let T p , 0 ≤ T p < T e , denote the last time at which the trajectory x(t) passes through the boundary of B T LD from inside before first exiting C. Since α is locally Lipschitz continuous and B LD is continuously differentiable, the right hand side of (4) is locally Lipschitz continuous. Thus solutions to the differential equatioṅ r(t) = α Le −βT β − r(t) + βr(t), where the initial condition is given by r(T p ) = B LD (x(T p )), exist and are unique for all t, T p ≤ t ≤ T e . On the other hand, the solution tȯ s(t) = βs(t), where the initial condition is given by s( T p ) = B LD (x(T p )) = Le −βT β , is s(t) = B LD (x(T p ))e β(t−Tp) , ∀t ≥ T p . It thus follows that s(T p + T ) = L β e −βT e βT = L β , and T p +T is the first time at which the trajectory s(t), t ≥ T p , exits C. Because α( Le −βT β − r(t)) ≤ 0, T p ≤ t ≤ T e , we obtain, by the Comparison Lemma [10], [60,Theorem 1.10.2], B LD (x(t)) ≤ r(t) ≤ s(t) for all t, T p ≤ t ≤ T e . If we assume T e < T p +T , it contradicts the fact that B LD (x(T e )) ≤ s(T e ) < s(T p + T ) = L β , and hence T e ≥ T p + T . Therefore, we conclude that any Lipschitz continuous policy φ : X → U such that φ(x) ∈ S T LD (x), ∀x ∈ C, renders the dynamical system safe up to time T p + T (≥ T ) whenever the initial state x(0) is in B T LD . APPENDIX B ON PROPOSITION IV.1 The width of a feasible set is defined as the unique solution to the following linear program: u ω (x) = max [u T ,ω] T ∈R nu+1 ω (B.1) s.t. 
L f B LD (x) + L g B LD (x)u + ω ≤ α Le −βT β − B LD (x) + βB LD (x) u + [ω, ω . . . , ω] T ∈ U APPENDIX C PROOF OF THEOREM IV.2 Because, by definition, V φ,β c (x) ≥L β , ∀x ∈ X \ C, it follows that C = x ∈ X :V φ,β c (x) <L β ⊂ C. Because the continuously differentiable functionV φ,β c (x) sat- isfiesV φ,β c (x) = L fV φ,β c (x) + L gV φ,β c (x)φ(x) = βV φ,β c (x) −ˆ c (x), ∀x ∈ C, andˆ c (x) ≥ 0, ∀x ∈ C, there exists at least one policy φ that satisfies φ(x) ∈ S T LD (x) = {u ∈ U : L fV φ,β c (x) + L gV φ,β c (x)u ≤ α L e −βT β −V φ,β c (x) + βV φ,β c (x)}, for all x ∈ C and for a monotonically increasing locally Lipschitz continuous function α such that α(q) = 0, ∀q ≤ 0. Therefore,V φ,β c is an LDCBF for the setĈ. Remark C.1. A sufficiently large constant c could be chosen in practice. Ifˆ c (x) > 0 for all x ∈ C and the value function is learned by using a policy φ such that φ(x)+[c φ , c φ . . . , c φ ] T ∈ U for some c φ > 0, then the unique solution to the linear program (B.1) satisfies u ω (x) > 0 for any x ∈ C. APPENDIX D PROOF OF PROPOSITION V.1 Under Assumption III.1, the trajectories x(t) with an initial condition x(0) ∈ B T LD satisfying the given LDCBF condition (that uses the battery dynamics model dÊ/dt = −K d ) exist and are unique over t ≥ 0. Let the actual exit time of E being below E min be T energy > 0, and let the first time at which the trajectory x(t) exits C be T e > 0 (T e = ∞ if the agent never exits C). Note it holds that T e ≤ T energy . Also, let E t and ρ t be the trajectories of E and ρ(p). Definê T e := min T e , inf {t : ρ t = 0 ∧ x(t) / ∈ B T LD } , and T p := max t : x(t) ∈ ∂B T LD ∧ t ≤T e , where ∂B T LD denotes the boundary of B T LD . Now, if E t = E min and ρ t > 0 for some t < T p , it contradicts the fact that B LD (x(t)) = E max − E t + ρ t < E max − E min (∵ x(t) ∈ C, ∀t < T p ) . Therefore, it follows that E t > E min or ρ t = 0 for all t < T p . This implies that we should only consider the time t ≥ T p by assuming that ρ Tp ≥ 0. LetÊ t be the trajectory following the virtual battery dynamics dÊ/dt = −K d andÊ Tp = E Tp , and let s(t) be the unique solution tȯ s(t) = βs(t), t ≥ T p , where s(T p ) = B LD (x(T p )) = (E max − E min )e −βT . Also, let (t) = s(t) +Ê t − E max , t ≥ T p . Then, the time at which s(t) becomes E max − E min is T p + T because s(T + T p ) = B LD (x(T p ))e β(T +Tp−Tp) = (E max − E min )e −βT e β(T +Tp−Tp) = E max − E min . Sincê T energy (E Tp ) ≤T energy (E 0 ) < T and (t) = B LD (x(T p ))e β(t−Tp) +Ê Tp − K d (t − T p ) − E max , we obtain T 0 := inf {t : (t) = 0 ∧ t ≥ T p } ≤ T p +T energy (E Tp ). On the other hand, the actual battery dynamics can be written as dE/dt = −K d +∆(x), where ∆(x) ≥ 0. Therefore, we see that the trajectory x(t) satisfies dB LD (x) dt ≤ βB LD (x) − ∆(x), ∀t, T p ≤ t ≤T e . Then, because d (B LD (x(t)) − s(t)) dt ≤ β (B LD (x(t)) − s(t)) − ∆(x(t)) ≤ β (B LD (x(t)) − s(t)) , ∀t, T p ≤ t ≤T e , and β (B LD (x(T p )) − s(T p )) = 0, we obtain B LD (x(t)) − s(t) ≤ − t 0 ∆(x(t))dt, ∀t, T p ≤ t ≤T e . Also, it is straightforward to see that T energy ≥ T e ≥ T p + T ≥ T p +T energy (E Tp ). From the definitions of B LD and (t), it follows that ρ t − (t) = B LD (x(T p )) − s(t) + E t −Ê t ≤ − t 0 ∆(x(t))dt + t 0 ∆(x(t))dt = 0, ∀t, T p ≤ t ≤T e , which leads to the inequality ρ t ≤ (t) for all t, T p ≤ t ≤T e . Hence, we conclude that T := inf {t : ρ t = 0 ∧ t ≥ T p } ≤T 0 ≤ T p +T energy (E Tp ) ≤ T energy , andT e =T , which proves the proposition. 
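In practice, the LDCBFs used in the results above are obtained by the value-function learning of Example IV.1. A minimal sketch of that fitted-value update follows, assuming PyTorch; the prioritized replay buffer, the positive/negative sampling ratio, the multiplication by Delta_t for the continuous-time form, and the squared-error loss written out explicitly here are simplifications, and all layer sizes and hyperparameters below are placeholders rather than the exact settings of the paper.

import torch
import torch.nn as nn

def make_value_net(n_x):
    # two hidden layers, as in the LDCBF network of the experiments (300 -> 200 units), scalar output
    return nn.Sequential(nn.Linear(n_x, 300), nn.ReLU(),
                         nn.Linear(300, 200), nn.ReLU(),
                         nn.Linear(200, 1))

n_x, gamma, mu, L_cost = 4, 0.99, 0.005, 1.0          # placeholders; gamma ~= exp(-beta * dt)
V_local, V_target = make_value_net(n_x), make_value_net(n_x)
V_target.load_state_dict(V_local.state_dict())
opt = torch.optim.Adam(V_local.parameters(), lr=1e-3)

def update(x, cost, x_next, safe_next):
    # x, x_next: (N, n_x) float tensors; cost: (N,) immediate costs ell(x_n);
    # safe_next: (N,) bool tensor, True iff x_{n+1} is in C.
    with torch.no_grad():
        y = torch.where(safe_next,
                        cost + gamma * V_target(x_next).squeeze(-1),
                        torch.full_like(cost, L_cost / (1.0 - gamma)))
    loss = nn.functional.mse_loss(V_local(x).squeeze(-1), y)
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():   # soft update: V_target <- mu * V_local + (1 - mu) * V_target
        for p_t, p_l in zip(V_target.parameters(), V_local.parameters()):
            p_t.mul_(1.0 - mu).add_(mu * p_l)
    return loss.item()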
APPENDIX E STOCHASTIC LIMITED DURATION CONTROL BARRIER FUNCTIONS For the system dynamics described by a stochastic differential equation: dx = h(x(t), u(t))dt + η(x(t))dw, (E.1) where h : R nx × U → R nx is the drift, η : R nx → R nx×nw is the diffusion, w is a Brownian motion of dimension n w ∈ Z + , it is often impossible to guarantee safety with probability one without making specific assumptions on the dynamics. Therefore, we instead consider an upper bound on the probability that a trajectory escape from the set of safe states within a given finite time. We give the formal definition below. Definition E.1 (Limited-duration safety for stochastic systems). Let B T,δ SLD be a closed nonempty subset of a open set of safe states C S and τ the first exit time τ := inf{t : x(t) = C S }. Then, the stopped processx(t) defined bỹ x(t) := x(t), t < τ, x(τ ), t ≥ τ, (E.2) where x(t) evolves by (E.1), is safe up to time T > 0 with probability δ, 0 ≤ δ ≤ 1, if there exists a policy φ which ensures that P x(t) = C S for some t, 0 ≤ t ≤ T : x(0) ∈ B T,δ SLD ≤ 1 − δ. To present stochastic limited duration control barrier functions (SLDCBFs) that are stochastic counterparts of LDCBFs, we define the infinitesimal generator G, for a function B SLD : X → R ≥0 , by G(B SLD )(x) := − 1 2 tr ∂ 2 B SLD (x) ∂x 2 η(x)η(x) T − ∂B SLD (x) ∂x h(x, φ(x)), x ∈ int(X ), where tr stands for the trace. Also, we make the following assumption. Then, the following theorem holds. Theorem E.1. Given T > 0 and δ, 0 ≤ δ ≤ 1, define a set of safe states C S := x ∈ X : B SLD (x) < L β , L > 0, β > 0, for a twice continuously differentiable function B SLD : X → R ≥0 . Define also the set B T,δ SLD as B T,δ SLD := x ∈ X : B SLD (x) ≤ (1 − δ) Le −βT β ⊂ C S . If B T,δ SLD is nonempty and if there exists a Lipschitz continuous policy φ : X → U satisfying φ(x) ∈ S T SLD := {u ∈ U : −G(B SLD )(x) ≤ βB SLD (x)}, for all x ∈ C S , then, under Assumption E.1, the policy φ renders the stopped processx(t) in (E.2) safe up to time T with probability δ. Proof. DefineB SLD : R ≥0 → R ≥0 as B SLD (t) := e −βt B SLD (x(t)) . Becausex(t) is an E x B SLD (x(t)) −B SLD (x(0)) = E x    t 0 −e −βt [G(B SLD ) + βB SLD ] (x(s)) ≤0 ds    for any t, 0 ≤ t < ∞, from which it follows that E x B SLD (x(t)) ≤B SLD (x(0)). Therefore,B SLD is a supermartingale with respect to the filtration {M t : t ≥ 0} generated byx(t) because B SLD (x(t)) is twice continuously differentiable and C S is bounded. We thus obtain [62, p.25] P sup 0 ≤ t ≤ T B SLD (x(t)) ≥ L β ≤ P sup 0 ≤ t ≤ TB SLD (t) ≥ Le −βT β ≤ βe βTB SLD (0) L = βe βT B SLD (x(0)) L ≤ 1 − δ. To make a claim similar to Theorem IV.2, we consider the following value function associated with the policy φ: V φ,β (x) := E x ∞ 0 e −βt (x(t))dt, β ≥ 0, where (x(t)) is the immediate cost and E x is the expectation for all trajectories (time evolutions of x(t)) starting from x = x(0). When V φ,β is twice continuously differentiable over int(X ), we obtain the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation [44]: βV φ,β (x) = −G(V φ,β )(x) + (x), x ∈ int(X ). (E.3) Given the set of safe states C S := {x ∈ X : (x) < L} , L > 0, for (x) ≥ 0, ∀x ∈ X , and the stopped process (E.2), assume that we employ a twice continuously differentiable function approximator to approximate the value function V φ,β for the stopped process, and letV φ,β denote the approximation of V φ,β . By using the HJBI equation (E.3), define the estimated immediate cost functionˆ aŝ (x) = βV φ,β (x) + G(V φ,β )(x), ∀x ∈ C S . 
Select c ≥ 0 so thatˆ c (x) :=ˆ (x) + c ≥ 0 for all x ∈ C S , and define the functionV φ,β c (x) :=V φ,β (x) + c β . Now, the following theorem holds. The function B LD1 ∨B LD2 is, however, nonsmooth in general. Therefore, even if we consider differential inclusion and associated Carathéodory solutions as in [23,63], there might exist sliding modes that violate inequalities imposed by a function B LD1 ∨B LD2 at x ∈ Ω f g+φ . Here, Ω f +gφ represents the zeromeasure set where the dynamics is nondifferentiable (see [64] for detailed arguments, for example). Nevertheless, to obtain a smooth LDCBF for C, we may obtain a smooth approximation of a possibly discontinuous policy φ that satisfies dB LDj (x * ) dt ≤ α Le −βT β − B LDj (x * ) + βB LDj (x * ), for all x * such that B LD1 (x * ) < B LD2 (x * ) for j = 1 and B LD2 (x * ) < B LD1 (x * ) for j = 2. Then, we can conduct value function learning to obtain an LDCBF for C with a set B T LD that is possibly smaller than B T LD .
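The exit-probability bound used in the proof of Theorem E.1, P(sup_{0 <= t <= T} B_SLD(x(t)) >= L/beta) <= beta * exp(beta*T) * B_SLD(x(0)) / L, can be checked empirically on a toy SDE. The example below is not from the paper: it uses dx = -x dt + sigma dw and B_SLD(x) = x^2 + sigma^2/beta, for which -G(B_SLD)(x) = sigma^2 - 2x^2 <= beta*x^2 + sigma^2 = beta*B_SLD(x) for every x, so the SLDCBF condition holds without any control effort, and an Euler-Maruyama simulation should keep the empirical exit frequency below the bound (up to discretization and sampling error).

import numpy as np

rng = np.random.default_rng(0)
sigma, beta, T, dt = 0.3, 0.5, 2.0, 1e-2
L, x0 = 2.0, 0.2
B = lambda x: x ** 2 + sigma ** 2 / beta        # candidate SLDCBF for dx = -x dt + sigma dw

n_paths, n_steps = 10000, int(T / dt)
x = np.full(n_paths, x0)
escaped = np.zeros(n_paths, dtype=bool)
for _ in range(n_steps):                         # Euler-Maruyama simulation of all paths
    x = x - x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    escaped |= B(x) >= L / beta                  # left C_S = {B < L/beta} at some grid point

empirical = escaped.mean()
bound = beta * np.exp(beta * T) * B(x0) / L      # supermartingale bound of Theorem E.1
print(empirical, bound, empirical <= bound)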
7,890
1908.09506
2969721933
When deploying autonomous agents in unstructured environments over sustained periods of time, adaptability and robustness oftentimes outweigh optimality as a primary consideration. In other words, safety and survivability constraints play a key role, and in this paper we present a novel constraint-learning framework for control tasks built on the idea of constraints-driven control. However, since control policies that keep a dynamical agent within state constraints over infinite horizons are not always available, this work instead considers constraints that can be satisfied over a sufficiently long time horizon T > 0, which we refer to as limited-duration safety. Consequently, value function learning can be used as a tool to help us find limited-duration safe policies. We show that, in some applications, the existence of limited-duration safe policies is actually sufficient for long-duration autonomy. This idea is illustrated on a swarm of simulated robots that are tasked with covering a given area, but that sporadically need to abandon this task to charge batteries. We show how the battery-charging behavior naturally emerges as a result of the constraints. Additionally, using a cart-pole simulation environment, we show how a control policy can be efficiently transferred from the source task, balancing the pole, to the target task, moving the cart in one direction without letting the pole fall down.
On the other hand, control barrier functions (CBFs) @cite_45 @cite_17 @cite_1 @cite_8 @cite_46 @cite_43 @cite_12 @cite_15 @cite_32 @cite_7 were proposed to guarantee that an agent remains in a certain region of the state space (i.e., forward invariance @cite_42 ) by using a locally accurate model of the agent dynamics (i.e., a model that accurately predicts the time derivative of the state at the current state and control input). When the system is linearizable and has a high relative degree, an exponential control barrier function @cite_41 was proposed and applied to the control of quadrotors @cite_49 . When a Lyapunov function is available, the work @cite_14 proposed a sum-of-squares approach to compute a valid barrier function. The idea of constraints-driven control stands in stark contrast to task-specific formulations that aim at singling out one optimal trajectory. However, although there exist converse theorems for safety and barrier functions, which state that a forward invariant set admits a barrier function under certain conditions @cite_29 @cite_21 @cite_46 , finding such a set without assuming stability of the system is difficult in general (see @cite_45 for conditions under which a candidate barrier function is a valid one).
{ "abstract": [ "Motivated by the need to simultaneously guarantee safety and stability of safety-critical dynamical systems, we construct permissive barrier certificates in this paper that explicitly maximize the region where the system can be stabilized without violating safety constraints. An iterative search algorithm is developed to search for the maximum volume barrier certified region of safe stabilization. The barrier certified region, which is allowed to take any arbitrary shape, is proved to be strictly larger than safe regions generated with Lyapunov sublevel set based methods. The proposed approach effectively unites a Lyapunov function with multiple barrier functions that might not be compatible with each other. Simulation results of the iterative search algorithm demonstrate the effectiveness of the proposed method.", "", "This paper presents safety barrier certificates that ensure scalable and provably collision-free behaviors in multirobot systems by modifying the nominal controllers to formally satisfy safety constraints. This is achieved by minimizing the difference between the actual and the nominal controllers subject to safety constraints. The resulting computation of the safety controllers is done through a quadratic programming problem that can be solved in real-time and in this paper, we describe a series of problems of increasing complexity. Starting with a centralized formulation, where the safety controller is computed across all agents simultaneously, we show how one can achieve a natural decentralization whereby individual robots only have to remain safe relative to nearby robots. Conservativeness and existence of solutions as well as deadlock-avoidance are then addressed using a mixture of relaxed control barrier functions, hybrid braking controllers, and consistent perturbations. The resulting control strategy is verified experimentally on a collection of wheeled mobile robots whose nominal controllers are explicitly designed to make the robots collide.", "In this paper we present a reformulation--framed as a constrained optimization problem--of multi-robot tasks which are encoded through a cost function that is to be minimized. The advantages of this approach are multiple. The constraint-based formulation provides a natural way of enabling long-term robot autonomy applications, where resilience and adaptability to changing environmental conditions are essential. Moreover, under certain assumptions on the cost function, the resulting controller is guaranteed to be decentralized. Furthermore, finite-time convergence can be achieved, while using local information only, and therefore preserving the decentralized nature of the algorithm. The developed control framework has been tested on a team of ground mobile robots implementing long-term environmental monitoring.", "We introduce Exponential Control Barrier Functions as means to enforce strict state-dependent high relative degree safety constraints for nonlinear systems. We also develop a systematic design method that enables creating the Exponential CBFs for nonlinear systems making use of tools from linear control theory. The proposed control design is numerically validated on a relative degree 6 linear system (the serial cart-spring system) and on a relative degree 4 nonlinear system (the two-link pendulum with elastic actuators.)", "An important tool for proving the safety of dynamical systems is the notion of a barrier certificate. 
In this paper, we prove that every robustly safe ordinary differential equation has a barrier certificate. Moreover, we show a construction of such a barrier certificate based on a set of states that is reachable in finite time.", "", "As multi-agent systems become more wide-spread and versatile, the ability to satisfy multiple system-level constraints grows increasingly important. In applications ranging from automated cruise control to safety in robot swarms, barrier functions have emerged as a tool to provably meet such constraints by guaranteeing forward invariance of desirable sets. However, satisfying multiple constraints typically implies formulating multiple barrier functions, which would be ameliorated if the barrier functions could be composed together as Boolean logic formulas. The use of max and min operators, which yields nonsmooth functions, represents one path to accomplish Boolean compositions of barrier functions, and this letter extends previously established concepts for barrier functions to a class of nonsmooth barrier functions that operate on systems described by differential inclusions. We validate our results by deploying Boolean compositions of nonsmooth barrier functions onto a team of mobile robots.", "This paper presents a safe learning framework that employs an adaptive model learning method together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique, and the resulting model will be used in combination with control barrier certificates which constrain policies (feedback controllers) in order to maintain safety, which refers to avoiding certain regions of the state space. Under certain conditions, recovery of safety in the sense of Lyapunov stability after violations of safety due to the nonstationarity is guaranteed. In addition, we reformulate action-value function approximation to make any kernel-based nonlinear function estimation method applicable to our adaptive learning framework. Lastly, solutions to the barrier-certified policy optimization are guaranteed to be globally optimal, ensuring greedy policy updates under mild conditions. The resulting framework is validated via simulations of a quadrotor, which has been used in the safe learnings literature under stationarity assumption, and then tested on a real robot called brushbot , whose dynamics is unknown, highly complex, and most probably nonstationary.", "This technical note shows that a barrier certificate exists for any safe dynamical system. Specifically, we prove converse barrier certificate theorems for a class of structurally stable dynamical systems. Other authors have developed a related result by assuming that the dynamical system has neither singular points nor closed orbits. In this technical note, we redefine the standard notion of safety to comply with dynamical systems with multiple singular elements. Hereafter, we prove the converse barrier certificate theorems and highlight the differences between our results and previous work by a number of illustrative examples.", "", "Abstract Barrier functions (also called certificates) have been an important tool for the verification of hybrid systems, and have also played important roles in optimization and multi-objective control. The extension of a barrier function to a controlled system results in a control barrier function. 
This can be thought of as being analogous to how Sontag extended Lyapunov functions to control Lypaunov functions in order to enable controller synthesis for stabilization tasks. A control barrier function enables controller synthesis for safety requirements specified by forward invariance of a set using a Lyapunov-like condition. This paper develops several important extensions to the notion of a control barrier function. The first involves robustness under perturbations to the vector field defining the system. Input-to-State stability conditions are given that provide for forward invariance, when disturbances are present, of a “relaxation” of set rendered invariant without disturbances. A control barrier function can be combined with a control Lyapunov function in a quadratic program to achieve a control objective subject to safety guarantees. The second result of the paper gives conditions for the control law obtained by solving the quadratic program to be Lipschitz continuous and therefore to gives rise to well-defined solutions of the resulting closed-loop system.", "Safety Barrier Certificates that ensure collision-free maneuvers for teams of differential flatness-based quadrotors are presented in this paper. Synthesized with control barrier functions, the certificates are used to modify the nominal trajectory in a minimally invasive way to avoid collisions. The proposed collision avoidance strategy complements existing flight control and planning algorithms by providing trajectory modifications with provable safety guarantees. The effectiveness of this strategy is supported both by the theoretical results and experimental validation on a team of five quadrotors.", "Safety critical systems involve the tight coupling between potentially conflicting control objectives and safety constraints. As a means of creating a formal framework for controlling systems of this form, and with a view toward automotive applications, this paper develops a methodology that allows safety conditions—expressed as control barrier functions —to be unified with performance objectives—expressed as control Lyapunov functions—in the context of real-time optimization-based controllers. Safety conditions are specified in terms of forward invariance of a set, and are verified via two novel generalizations of barrier functions; in each case, the existence of a barrier function satisfying Lyapunov-like conditions implies forward invariance of the set, and the relationship between these two classes of barrier functions is characterized. In addition, each of these formulations yields a notion of control barrier function (CBF), providing inequality constraints in the control input that, when satisfied, again imply forward invariance of the set. Through these constructions, CBFs can naturally be unified with control Lyapunov functions (CLFs) in the context of a quadratic program (QP); this allows for the achievement of control objectives (represented by CLFs) subject to conditions on the admissible states of the system (represented by CBFs). The mediation of safety and performance through a QP is demonstrated on adaptive cruise control and lane keeping, two automotive control problems that present both safety and performance considerations coupled with actuator bounds.", "In this letter, we consider the problem of rendering robotic tasks persistent by ensuring that the robots' energy levels are never depleted, which means that the tasks can be executed over long time horizons. 
This process is referred to as the persistification of the task. In particular, the state of each robot is augmented with its battery level so that the desired persistent behavior can be encoded as the forward invariance of a set such that the robots never deplete their batteries. Control barrier functions are employed to synthesize controllers that ensure that this set is forward invariant and, therefore, that the robotic task is persistent. As an application, this letter considers the persistification of a robotic sensor coverage task in which a group of robots has to cover an area of interest. The successful persistification of the coverage task is shown in simulation and on a team of mobile robots.", "Abstract This paper presents a new safety feedback design for nonlinear systems based on barrier certificates and the idea of control Lyapunov functions. In contrast to existing methods, this approach ensures safety independently of abstract high-level tasks that might be unknown or change over time. Leaving as much freedom as possible to the safe system, the authors believe that the flexibility of this approach is very promising. The design is validated using an illustrative example." ], "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_15", "@cite_41", "@cite_29", "@cite_42", "@cite_1", "@cite_32", "@cite_21", "@cite_43", "@cite_45", "@cite_49", "@cite_46", "@cite_12", "@cite_17" ], "mid": [ "2963995490", "", "2588802774", "2899936862", "2489231587", "2963249498", "", "2620840602", "2889711700", "1749724378", "2735010720", "2468433498", "2591921113", "2560504659", "2781971957", "1972149633" ] }
Constraint Learning for Control Tasks with Limited Duration Barrier Functions
Acquiring an optimal policy that attains the maximum return over some time horizon is of primary interest in the literature of both reinforcement learning [1][2][3] and optimal control [4]. A large number of algorithms have been designed to successfully control systems with complex dynamics to accomplish specific tasks, such as balancing an inverted pendulum and letting a humanoid robot run to a target location. Those algorithms may result in control strategies that are energy-efficient, take the shortest path to the goal, spend less time to accomplish the task, and sometimes outperform human beings in these senses (cf. [5]). As we can observe in the daily life, on the other hand, it is often difficult to attribute optimality to human M. Ohnishi is with the the RIKEN Center for Advanced Intelligence Project, Tokyo, Japan, and with the Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA (email: mohnishi@cs.washington.edu). G. Notomista is with the School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA (e-mail: g.notomista@gatech.edu). M. Sugiyama is with the RIKEN Center for Advanced Intelligence Project, Tokyo, Japan, and with the Department of Complexity Science and Engineering, the University of Tokyo (e-mail: sugi@k.u-tokyo.ac.jp). M. Egerstedt is with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA (e-mail: mag-nus@gatech.edu). behaviors, e.g., the behaviors are hardly the most efficient for any specific task (cf. [6]). Instead, humans are capable of generalizing the behaviors acquired through completing a certain task to deal with unseen situations. This fact casts a question of how one should design a learning algorithm that generalizes across tasks rather than focuses on a specific one. In this paper, we hypothesize that this can be achieved by letting the agents acquire a set of good enough policies when completing one task instead of finding a single optimal policy, and reuse this set for another task. Specifically, we consider safety, which refers to avoiding certain states, as useful information shared among different tasks, and we regard limited-duration safe policies as good enough policies. Our work is built on the idea of constraints-driven control [7,8], a methodology for controlling agents by telling them to satisfy constraints without specifying a single optimal path. If feasibility of the assigned constraints is guaranteed, this methodology avoids recomputing an optimal path when a new task is given but instead enables high-level compositions of constraints. However, state constraints are not always feasible and arbitrary compositions of constraints cannot be validated in general [9]. We tackle this feasibility issue by relaxing safety (i.e., forward invariance [10] of the set of safe states) to limited-duration safety, by which we mean satisfaction of safety over some finite time horizon T > 0 (see Figure I.1). For an agent starting from a certain subset of the safe region, one can always find a set of policies that render this agent safe up to some finite time. To guarantee limited-duration safety, we propose a limited duration control barrier function (LDCBF). The idea is based on local model-based control that constrains the instantaneous control input every time to restrict the growths of values of LDCBFs by solving a computationally inexpensive quadratic programming (QP) problem. 
To find an LDCBF, we make use of so-called global value function learning. More specifically, we assign a high cost to unsafe states and a lower cost to safe states, and learn the value function (or discounted infinite-horizon cost) associated with any given policy. Then, it is shown that the value function associated with any given policy is an LDCBF, i.e., a nonempty set of limited-duration safe policies can be obtained (Section IV-B). Contrary to the optimal control and Lyapunovbased approaches that only single out an optimal policy, our learning framework aims at learning a common set of policies that can be shared among different tasks. Thus, our framework can be contextualized within the so-called lifelong learning [11] and transfer learning [12] (or safe transfer; see Section V-B). The rest of this paper is organized as follows: Section II discusses the related work, including constraints-driven control and transfer learning, Section III presents notations, assumptions made in this paper, and some background knowledge. Subsequently, we present our main contributions and their applications in Section IV and Section V, respectively. In Section VI, we first validate LDCBFs on an existing control problem (see Section II). Then, our learning framework is applied to the cart-pole simulation environment in DeepMind Control Suite [13]; safety is defined as keeping the pole from falling down, and we use an LDCBF obtained after learning the balance task to facilitate learning the new task, namely, moving the cart without letting the pole fall down. III. PRELIMINARIES Throughout, R, R ≥0 and Z + are the sets of real numbers, nonnegative real numbers and positive integers, respectively. Let · R d := x, x R d be the norm induced by the inner product x, y R d := x T y for d-dimensional real vectors x, y ∈ R d , where (·) T stands for transposition. In this paper, we consider an agent with system dynamics described by an ordinary differential equation: dx dt = h(x(t), u(t)),(1) where x(t) ∈ R nx and u(t) ∈ U ⊂ R nu are the state and the instantaneous control input of dimensions n x , n u ∈ Z + , and h : R nx × U → R nx . Let X be the state space which is a compact subset of R nx . 1 In this work, we make the following assumptions. Assumption III.1. For any locally Lipschitz continuous policy φ : X → U, h is locally Lipschitz with respect to x. Assumption III.2. The control space U(⊂ R nu ) is a polyhedron. Given a policy φ : X → U and a discount factor β > 0, define the value function associated with the policy φ by V φ,β (x) := ∞ 0 e −βt (x(t))dt, β > 0, where (x(t)) is the immediate cost and x(t) is the trajectory starting from x(0) = x. When V φ,β is continuously differentiable over int(X ), namely, the interior of X , we obtain the Hamilton-Jacobi-Bellman (HJB) equation [44]: βV φ,β (x) = ∂V φ,β (x) ∂x h(x, φ(x)) + (x), x ∈ int(X ).(2) Now, if the immediate cost (x) is positive for all x ∈ X except for the equilibrium, and that a zero-cost state is globally asymptotically stable with the given policy φ : X → U, one can expect that the value function V φ,0 (x) := ∞ 0 (x(t))dt has finite values over X . In this case, the HJB equation becomesV φ,0 (x(t)) := dV φ,0 dt (x(t)) = − (x(t)), i.e., V φ,0 is decreasing over time. As such, it is straightforward to see that V φ,0 is a control Lyapunov function, i.e., there always exists a policy that satisfies the decrease condition for V φ,0 . 
However, there are two major limitations to this approach: i) one must assume that the agent stabilizes in a zero-cost state by the given policy φ, and ii) forward invariant sublevel sets of the control Lyapunov function usually become too conservative with respect to the given safe states. 2 To remedy these drawbacks, we present our major contribution in the next section. IV. CONSTRAINT LEARNING FOR CONTROL TASKS In this section, we propose limited duration control barrier functions (LDCBFs), and present their properties and a practical way to find an LDCBF. A. Limited Duration Control Barrier Functions Before formally presenting LDCBFs, we give the following definition. Definition IV.1 (Limited-duration safety). Given an open set of safe states C, let B T LD be a closed nonempty subset of C ⊂ X . The dynamical system (1) is said to be safe up to time T , if there exists a policy φ that ensures x(t) ∈ C for all 0 ≤ t < T whenever x(0) ∈ B T LD . Now, we give the definition of the LDCBFs. Definition IV.2. Let a function B LD : X → R ≥0 be continuously differentiable. Suppose that h(x, u) = f (x) + g(x) u, x ∈ X , u ∈ U, and that the set of safe states is given by C := x ∈ X : B LD (x) < L β , L > 0, β > 0.(3) Define the set B T LD = x ∈ X : B LD (x) ≤ Le −βT β ⊂ C, for some T > 0. Define also L f and L g as the Lie derivatives along f and g. Then, B LD is called a limited duration control barrier function for C and for the time horizon T if B T LD is nonempty and if there exists a monotonically increasing locally Lipschitz continuous function 3 α : R → R such that α(0) = 0 and inf u∈U {L f B LD (x) + L g B LD (x)u} ≤ α Le −βT β − B LD (x) + βB LD (x),(4) for all x ∈ C. Given an LDCBF, the admissible control space S T LD (x), x ∈ C, can be defined as S T LD (x) := {u ∈ U : L f B LD (x) + L g B LD (x)u ≤ α Le −βT β − B LD (x) + βB LD (x)}.(5) Given an LDCBF, safety up to time T is guaranteed if the initial state is taken in B T LD and an admissible control is employed as the following theorem claims. Theorem IV.1. Given a set of safe states C defined by (3) and an LDCBF B LD defined on X under Assumption III.1, any locally Lipschitz continuous policy φ : X → U that satisfies φ(x) ∈ S T LD (x), ∀x ∈ C, renders the dynamical system (1) safe up to time T whenever the initial state is in B T LD . Proof. See Appendix A. When h(x, u) = f (x) + g(x)u, x ∈ X , u ∈ U, one can constrain the control input within the admissible control space S T LD (x), x ∈ C, using a locally accurate model via QPs in the same manner as control barrier functions and control Lyapunov functions. Here, we present a general form of control syntheses via QPs. Proposition IV.1. Suppose that h(x, u) = f (x) + g(x)u, x ∈ X , u ∈ U. Given an LDCBF B LD with a locally Lipschitz derivative and the admissible control space S T LD (x * ) at x * ∈ C defined in (5), consider the QP: φ(x * ) = argmin u∈S T LD (x) u T H(x * )u + 2b(x * ) T u,(6) where H and b are Lipschitz continuous at x * ∈ C, and H(x * ) = H T (x * ) is positive definite. If the width 4 of a feasible set is strictly larger than zero, then, under Assumption III.2, the policy φ(x) defined in (6) is unique and Lipschitz continuous with respect to the state at x * . Proof. Slight modifications of [45, Theorem 1] proves the proposition. To see an advantage of considering LDCBFs, we show that an LDCBF can be obtained systematically as described next. B. 
Finding a Limited Duration Control Barrier Function We present a possible way to find an LDCBF B LD for the set of safe states through global value function learning. Let (x) ≥ 0, ∀x ∈ X , and suppose that the set of safe states is given by C := {x ∈ X : (x) < L} , L > 0.(7) Given the dynamical system defined in Definition IV.2, consider the virtual systeṁ x(t) = f (x(t)) + g(x(t))φ(x(t)) if x(t) ∈ C otherwise 0,(8) for a policy φ. Assume that we employ a continuously differentiable function approximator to approximate the value function V φ,β for the virtual system, and letV φ,β denote an approximation of V φ,β . By using the HJB equation (2), define the estimated immediate cost functionˆ aŝ (x) = βV φ,β (x) − L fV φ,β (x)−L gV φ,β (x)φ(x), ∀x ∈ C. Select c ≥ 0 so thatˆ c (x) :=ˆ (x) + c ≥ 0 for all x ∈ C, and define the functionV φ,β c (x) :=V φ,β (x) + c β . Then, the following theorem holds. Theorem IV.2. Consider the set B T LD = x ∈ X :V φ,β c (x) ≤L e −βT β , whereL := inf y ∈ X \ C βV φ,β c (y). IfB T LD is nonempty, then the dynamical system starting from the initial state inB T LD is safe up to time T when the policy φ is employed, andV φ,β c (x) is an LDCBF for the set C := x ∈ X :V φ,β c (x) <L β ⊂ C. Proof. See Appendix C. Remark IV.1. We need to considerV φ,β c instead of V φ,β because the immediate cost function and the virtual system (8) need to be sufficiently smooth to guarantee that V φ,β is continuously differentiable. In practice, the choice of c andL affects conservativeness of the set of safe states. Note, to enlarge the set B T LD , the immediate cost (x) is preferred to be close to zero for x ∈ C, and L needs to be sufficiently large. Example IV.1. As an example of finding an LDCBF, we use a deep neural network. Suppose the discrete-time transition is given by (x n , u n , (x n ), x n+1 ), where n ∈ 0 ∪ Z + is the time instant. Then, by executing a given policy φ, we store the negative data, where x n+1 / ∈ C, and the positive data, where x n+1 ∈ C, separately, and conduct prioritized experience replay [46,47]. Specifically, initialize a target networkV φ,β Target and a local networkV φ,β Local , and update the local network by sampling a random minibatch of N negative and positive transitions {(x ni , u ni , (x ni ), x ni+1 )} i∈{1,2,...,N } to minimize 1 N N i=1 (y ni −V φ,β Local (x ni )), where y ni = (x ni ) + γV φ,β Target (x ni+1 ) x ni+1 ∈ C, L 1−γ x ni+1 / ∈ C. Here, γ ≈ − log (β)/∆ t is a discount factor for a discretetime case, where ∆ t is the time interval of one time step. The target network is soft-updated using the local network byV φ,β Target ← µV φ,β Local + (1 − µ)V φ,β Target for µ 1. One can transform the learned local network to a continuous-time form via multiplying it by ∆ t . Although we cannot ensure forward invariance of the set C using LDCBFs, the proposed approach is still set theoretic. As such, we can consider the compositions of LDCBFs. C. Compositions of Limited Duration Barrier Functions The Boolean compositional CBFs were studied in [23,48]. In [23], max and min operators were used for the Boolean operations, and nonsmooth barrier functions were proposed out of necessity. However, it is known that, even if two sets C 1 ⊂ X and C 2 ⊂ X are controlled invariant [49, page 21] for the dynamical system (1), the set C 1 ∩ C 2 is not necessarily controlled invariant, while C 1 ∪C 2 is indeed controlled invariant [49, Proposition 4.13]. We can make a similar assertion for limited-duration safety as follows. Proposition IV.2. 
Assume that there exists a limited-duration safe policy for each set of safe states C j ⊂ X , j ∈ {1, 2, . . . , J}, J ∈ Z + , that renders an agent with the dynamical system (1) safe up to time T whenever starting from inside a closed nonempty set B LDj ⊂ C j . Then, given the set of safe states C := J j=1 C j , there also exists a policy rendering the dynamical system safe up to time T whenever starting from any state in B LD := J j=1 B LDj . Proof. A limited-duration safe policy for C j also keeps the agent inside the set C up to time T when starting from inside B LDj . If there exist LDCBFs for C j s, it is natural to ask if there exists an LDCBF for C. Because of the nonsmoothness stemming from Boolean compositions, however, obtaining an LDCBF for C requires an additional learning in general (see Appendix F). Also, existence of an LDCBF for the intersection of multiple sets of safe states is not guaranteed, and we need an additional learning as well. So far, we have seen a possible way to obtain an LDCBF for a given set of safe states expressed as in (7). As our approach is set-theoretic rather than specifying a single optimal policy, it is also compatible with the constraints-driven control and transfer learning as described in the next section. V. APPLICATIONS In this section, we present two practical examples that illustrate benefits of considering LDCBFs, namely, long-duration autonomy and transfer learning. A. Applications to Long-duration Autonomy In many applications, guaranteeing particular properties (e.g., forward invariance) over an infinite-time horizon is difficult or some forms of approximations are required. Specifically, when a function approximator is employed, there will certainly be an approximation error. Nevertheless, it is often sufficient to guarantee safety up to certain finite time, and our proposed LDCBFs act as useful relaxations of CBFs. To see that one can still achieve long-duration autonomy by using LDCBFs, we consider the settings of work in [27]. Suppose that the state x := [E, p T ] T ∈ R 3 has the information of energy level E ∈ R ≥0 and the position p ∈ R 2 of an agent. Suppose also that E max > 0 is the maximum energy level and ρ(p) ≥ 0 (equality holds only when the agent is at a charging station) is the energy required to bring the agent to a charging station from the position p ∈ R 2 . Then, although we emphasize that we can obtain an LDCBF by value function learning if necessary, let us assume that the function B LD (x) := E max − E + ρ(p) ≥ 0 is an LDCBF, for simplicity. Then, by letting L = β(E max − E min ) for some β > 0 and for the minimum necessary energy level E min , 0 ≤ E min < E max , the set of safe states can be given by C := x ∈ X : B LD (x) < L β . Now, under these settings, the following proposition holds. Proposition V.1. Assume that the energy dynamics is lower bounded by dE dt ≥ −K d , ∃K d > 0, which implies that the least exit timeT energy (E) of E being below E min iŝ T energy (E) = (E − E min ) K d . Also, suppose we employ a locally Lipschitz continuous policy φ that satisfies the LDCBF condition using a battery dynamics model dÊ dt = −K d . Then, by taking T >T energy (E 0 ) for the initial energy level E 0 > E min and letting B T LD := x ∈ X : B LD (x) ≤ Le −βT β ⊂ C, the agent starting from a state in B T LD will reach the charging station before the energy reaches to E min . Proof. See Appendix D. 
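A policy that "satisfies the LDCBF condition", as required in Proposition V.1 and as used throughout the experiments, can be obtained by filtering any nominal input through the QP of Proposition IV.1. Below is a minimal cvxpy sketch with H = I and b = -u_ref (a minimum-deviation filter); the model functions f, g, the barrier B and its gradient grad_B, the class-like function alpha, and the box input bounds are placeholders, not the paper's code.

import numpy as np
import cvxpy as cp

def ldcbf_qp_filter(x, u_ref, f, g, B, grad_B, alpha, L, beta, T, u_min, u_max):
    """Project a nominal input u_ref onto the admissible set S^T_LD(x) of (5)
    by solving the QP (6) with H = I and b = -u_ref (a sketch, not the
    authors' implementation)."""
    n_u = u_ref.shape[0]
    u = cp.Variable(n_u)
    LfB = float(grad_B(x) @ f(x))           # Lie derivative L_f B_LD(x)
    LgB = grad_B(x) @ g(x)                  # Lie derivative L_g B_LD(x), shape (n_u,)
    rhs = alpha(L * np.exp(-beta * T) / beta - B(x)) + beta * B(x)
    constraints = [LfB + LgB @ u <= rhs,    # LDCBF condition defining S^T_LD(x)
                   u >= u_min, u <= u_max]  # polyhedral (here: box) input set U
    problem = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)), constraints)
    problem.solve()
    return u.value

# e.g., alpha = lambda q: max(0.1 * q, 0.0), as used in Section VI-B, Step 3.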
Hence, LDCBFs are shown to be applicable to some cases where it is difficult to guarantee that certain properties hold over infinite horizons, but where limited-duration safety suffices. One of the benefits of using LDCBFs is that, once a set of limited-duration safe policies or good enough policies is obtained, one can reuse them for different tasks. Therefore, given that we can obtain an LDCBF through global value function learning for a policy that is not necessarily stabilizing the system, it is natural to ask if one can employ LDCBFs to transfer knowledge. Indeed, LDCBFs also have good compatibility with transfer learning (or safe transfer) as discussed below. B. Applications to Transfer Learning Given the immediate cost function, reinforcement learning aims at finding an optimal policy. Clearly, the obtained policy is not optimal with respect to a different immediate cost function in general. Therefore, employing the obtained policy straightforwardly in different tasks makes no sense. However, it is quite likely that the obtained policy is good enough even in different tasks because some mandatory constraints such as avoiding unsafe regions of the state space are usually shared among different tasks. Therefore we wish to exploit constraint learning for the sake of transfer learning, i.e., we learn constraints which are common to the target task while learning source tasks. Definition V.1 (Transfer learning, [12, modified version of Definition 1]). Given a set of training data D S for one task (i.e., source task) denoted by T S (e.g., an MDP) and a set of training data D T for another task (i.e., target task) denoted by T T , transfer learning aims to improve the learning of the target predictive function f T (i.e., a policy in our example) in D T using the knowledge in D S and T S , where D S = D T , or T S = T T . For example, when learning an optimal policy for the balance task of the cart-pole problem, one can simultaneously learn a set of limited-duration safe policies that keep the pole from falling down up to certain time T > 0. The set of these limited-duration safe policies is obviously useful for other tasks such as moving the cart to one direction without letting the pole fall down. Here, we present a possible application of limited duration control barrier functions to transfer learning. We take the following steps: 1) Design J ∈ Z + cost functions j , j ∈ {1, 2, . . . , J}, each of which represents a constraint by defining a set of safe state C j := {x ∈ X : j (x) < L} , L > 0, j ∈ {1, 2, . . . , J}. 2) Conduct reinforcement learning for a cost function by using any of the reinforcement learning techniques. 3) No matter the currently obtained policy φ is optimal or not, one can obtain an LDCBF B LDj for each set C j , j ∈ {1, 2, . . . , J}. More specifically, B LDj is given by B LDj (x) := V φ,β j (x) := ∞ 0 e −βt j (x(t))dt, j ∈ {1, 2, . . . , J}, for x(0) = x. 4) When learning a new task, policies are constrained by LDCBFs depending on which constraints are common to the new task. We study some practical implementations. Given LDCBFs B LDj , j ∈ {1, 2, ..., J}, define the set Φ T j of admissible policies as Φ T j := {φ :φ(x) ∈ S T LDj (x), ∀x ∈ C j } ⊂ Φ := {φ : φ(x) ∈ U, ∀x ∈ X }, where S T LDj (x) is the set of admissible control inputs at x for the jth constraint. If an optimal policy φ T T for the target task T T is included in Φ T j , one can conduct learning for the target task within the policy space Φ T j . 
If not, one can still consider Φ T j as a soft constraint and can explore the policy space Φ \ Φ T j with a given probability or can just select the initial policy from Φ T j . In practice, a parametrized policy is usually considered; a policy φ θ expressed by a parameter θ is updated via policy gradient methods [50]. If the policy is in the linear form with a fixed feature vector, the projected policy gradient method [51] can be used. Thanks to the fact that an LDCBF defines an affine constraint on instantaneous control inputs if the system dynamics is affine in control, the projected policy gradient method looks like θ ← Γ j [θ + λ∇ θ F T T (θ)]. Here, Γ j : Φ → Φ T j projects a policy onto the affine constraint defined by the jth constraint and F T T (θ) is the objective function for the target task which is to be maximized. For the policy not in the linear form, one may update policies based on LDCBFs by modifying the deep deterministic policy gradient (DDPG) method [52]: because through LDCBFs, the global property (i.e., limited-duration safety) is ensured by constraining local control inputs, it suffices to add penalty terms to the cost when updating a policy using samples. For example, one may employ the log-barrier extension proposed in [53], which is a smooth approximation of the hard indicator function for inequality constraints but is not restricted to feasible points. VI. EXPERIMENTS In this section, we validate our learning framework. First, we show that LDCBFs indeed work for the constraints-driven control problem considered in Section V-A by simulation. Then, we apply LDCBFs to a transfer learning problem for the cart-pole simulation environment. A. Constraints-driven coverage control of multi-agent systems Let the parameters be E max = 1.0, E min = 0.55, K d = 0.01, β = 0.005 and T = 50.0 > 45.0 = (E max − E min )/K d . We consider six agents (robots) with single integrator dynamics. An agent of the position p i := [x i , y i ] T is assigned a charging station of the position p charge,i , where x and y are the X position and the Y position, respectively. When the agent is close to the station (i.e., p i − p charge,i R 2 ≤ 0.05), it remains there until the battery is charged to E ch = 0.92. Actual battery dynamics is given by dE/dt = −0.01E. The coverage control task is encoded as Lloyd's algorithm [54] aiming at converging to the Centroidal Voronoi Tesselation, but with a soft margin so that the agent prioritizes the safety constraint. The locational cost used for the coverage control task is given as follows [55]: MATLAB simulation (the simulator is provided on the Robotarium [56] website: www.robotarium.org), we used the random seed rng(5) for determining the initial states. Note, for every agent, the energy level and the position are set so that it starts from inside the set B T LD . Figure VI.1 shows (a) the images of six agents executing coverage tasks and (b) images of the agents three of which are charging their batteries. Figure VI-B shows the simulated battery voltage data of the six agents, from which we can observe that LDCBFs worked effectively for the swarm of agents to avoid depleting their batteries. 6 i=1 Vi(p) p i −p 2 ϕ(p)dp, where V i (p) = {p ∈ X : p i −p ≤ p j −p , ∀j = i} is the Voronoi cell for the agent i. In particular, we used ϕ([x,ŷ] T ) = e −{(x−0.2) 2 +(ŷ−0.3) 2 }/0.06 + 0.5e −{(x+0.2) 2 +(ŷ+0.1) 2 }/0.03 . In B. Transfer from Balance to Move: Cart-pole problem Next, we apply LDCBFs to transfer learning. 
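Before turning to the cart-pole experiment, the safe set used in the coverage task above can be made concrete with the stated parameters. In the sketch below, the form of rho(p) (proportional to the distance to the charging station) is an illustrative assumption; the paper only requires rho(p) >= 0 with equality at the station.

import numpy as np

E_MAX, E_MIN, BETA, T = 1.0, 0.55, 0.005, 50.0   # parameters of Section VI-A
L = BETA * (E_MAX - E_MIN)                        # so that C = {B_LD < L / beta}

def B_LD(E, p, p_charge, k_rho=1.0):
    """Energy LDCBF B_LD(x) = E_max - E + rho(p); the form of rho is illustrative."""
    rho = k_rho * np.linalg.norm(p - p_charge)
    return E_MAX - E + rho

def in_B_T_LD(E, p, p_charge, k_rho=1.0):
    """Initial-condition set B^T_LD = {x : B_LD(x) <= L * exp(-beta * T) / beta}."""
    return B_LD(E, p, p_charge, k_rho) <= L * np.exp(-BETA * T) / BETA

# Example: a fully charged agent sitting at its station satisfies the condition,
# since B_LD = 0 <= 0.45 * exp(-0.25) ~= 0.35:
# in_B_T_LD(E=1.0, p=np.zeros(2), p_charge=np.zeros(2))  -> True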
The simulation environment and the deep learning framework used in this experiment are "Cart-pole" in DeepMind Control Suite and PyTorch [57], respectively. We take the following steps: 1) Learn a policy that successfully balances the pole by using DDPG [52]. 2) Learn an LDCBF by using the obtained actor network. 3) Try a random policy with the learned LDCBF and a (locally) accurate model to see that LDCBF works reasonably. 4) With and without the learned LDCBF, learn a policy that moves the cart to left without letting the pole fall down, which we refer to as move-the-pole task. The parameters used for this experiment are summarized in Table VI.1. Here, angle threshold stands for the threshold of cos ψ where ψ is the angle of the pole from the standing position, and position threshold is the threshold of the cart position p. The angle threshold and the position threshold are used to terminate an episode. Note that the cart-pole environment of MuJoCo [58] xml data in DeepMind Control Suite is modified so that the cart can move between −3.8 and 3.8. As presented in Example IV.1, we use prioritized experience replay when learning an LDCBF. Specifically, we store the positive and the negative data, and sample 4 data points from the positive one and the remaining 60 data points from the negative one. In this experiment, actor, critic and LDCBF networks use ReLU nonlinearities. The actor network and the LDCBF network consist of two layers of 300 → 200 units, and the critic network is of two layers of 400 → 300 units. The control input vector is concatenated to the state vector from the second critic layer. Step1: The average duration (i.e., the first exit time, namely, the time when the pole first falls down) out of 10 seconds (corresponding to 1000 time steps) over 10 trials for the policy learned through the balance task by DDPG was 10 seconds. Step2: Then, by using this successfully learned policy, an LDCBF is learned by assigning the cost (x) = 1.0 for cos ψ < 0.2 and (x) = 0.1 elsewhere. Also, because the LDCBF is learned in a discrete-time form, we transform it to a continuous-time form via multiplying it by ∆ t = 0.01. When learning an LDCBF, we initialize each episode as follows: the angle ψ is uniformly sampled within −1.5 ≤ ψ ≤ 1.5, the cart velocityṗ is multiplied by 100 and the angular velocityψ is multiplied by 200 after being initialized by DeepMind Control Suite. The LDCBF learned by using this policy is illustrated in Figure VI Step3: To test this LDCBF, we use a uniformly random policy (φ(x) takes the value between −1 and 1) constrained by the LDCBF with the function α(q) = max {0.1q, 0} and with the time constant T = 5.0. When imposing constraints, we use the (locally accurate) control-affine model of the cart-pole in the work [59], where we replace the friction parameters by zeros for simplicity. The average duration out of 10 seconds over 10 trials for this random policy was 10 seconds, which indicates that the LDCBF worked sufficiently well. We also tried this LDCBF with the function α(q) = max {3.0q, 0} and T = 5.0, which resulted in the average duration of 5.58 seconds. Moreover, we tried the fixed policy φ(x) = 1.0, ∀x ∈ X , with the function α(q) = max {0.1q, 0} and T = 5.0, and the average duration was 4.73 seconds, which was sufficiently close to T = 5.0. Step4: For the move-the-pole task, we define the success by the situation where the cart position p, −3.8 ≤ p ≤ 3.8, ends up in the region of p ≤ −1.8 without letting the pole fall down. 
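Step 2 above is an instance of Example IV.1. A minimal PyTorch sketch of the value-network update is given below; it uses plain uniform sampling from the positive and negative buffers rather than full prioritized replay, a squared TD error, and illustrative hyperparameters, so it should be read as an assumption-laden sketch rather than the authors' implementation.

import torch
import torch.nn as nn

class LDCBFNet(nn.Module):
    """MLP approximator of the discounted value function V^{phi,beta}
    (two ReLU layers of 300 -> 200 units, as reported for the LDCBF network)."""
    def __init__(self, n_x):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_x, 300), nn.ReLU(),
                                 nn.Linear(300, 200), nn.ReLU(),
                                 nn.Linear(200, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

def td_step(local, target, opt, x, x_next, ell, in_C, L, gamma, mu=1e-3):
    """One update of Example IV.1: y = ell(x) + gamma * V_target(x') if x' in C,
    and y = L / (1 - gamma) otherwise, followed by the soft target update
    V_target <- mu * V_local + (1 - mu) * V_target."""
    with torch.no_grad():
        y = torch.where(in_C(x_next),
                        ell(x) + gamma * target(x_next),
                        torch.full_like(ell(x), L / (1.0 - gamma)))
    loss = nn.functional.mse_loss(local(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    for p_t, p_l in zip(target.parameters(), local.parameters()):
        p_t.data.mul_(1.0 - mu).add_(mu * p_l.data)
    return loss.item()

# After training, a continuous-time LDCBF value is recovered by multiplying the
# local network output by the time step (cf. Example IV.1 and Step 2 above).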
The angle ψ is uniformly sampled within −0.5 ≤ ψ ≤ 0.5 and the rest follow the initialization of DeepMind Control Suite. The reward is given by (1 + cos ψ)/2×(utils.rewards.tolerance(ṗ + 1.0, bounds = (−2.0, 0.0), margin = 0.5)), where utils.rewards.tolerance is the function defined in [13]. In other words, to move the pole to left, we give high rewards when the cart velocity is negative and the pole is standing up. To use the learned LDCBF for DDPG, we store matrices and vectors used in linear constraints along with other variables such as control inputs and states, which are then used for experience replay. Then, the logbarrier extension cost proposed in [53] is added when updating policies. Also, we try DDPG without using LDCBF for the move-the-pole task. Both approaches initialize the policy by the one obtained after the balance task. The average success rates of the policies obtained after the numbers of episodes up to 15 over 10 trials are given in Table VI.2 for DDPG with the learned LDCBF and DDPG without LDCBF. This result implies that our proposed approach successfully transferred information from the source task to the target task by sharing a common safety constraint. VII. CONCLUSION In this paper, we presented a notion of limited-duration safety as a relaxation of forward-invariance of a set of safe states. Then, we proposed limited-duration control barrier functions that are used to guarantee limited-duration safety by using locally accurate model of agent dynamics. We showed that LDCBFs can be obtained through global value function learning, and analyzed some of their properties. LDCBFs were validated through persistent coverage control tasks and were successfully applied to a transfer learning problem via sharing a common state constraint. APPENDIX A PROOF OF THEOREM IV.1 Under Assumption III.1, the trajectories x(t) with an initial condition x(0) ∈ B T LD exist and are unique over t ≥ 0. Let the first time at which the trajectory x(t) exits C be T e > 0 and let T p , 0 ≤ T p < T e , denote the last time at which the trajectory x(t) passes through the boundary of B T LD from inside before first exiting C. Since α is locally Lipschitz continuous and B LD is continuously differentiable, the right hand side of (4) is locally Lipschitz continuous. Thus solutions to the differential equatioṅ r(t) = α Le −βT β − r(t) + βr(t), where the initial condition is given by r(T p ) = B LD (x(T p )), exist and are unique for all t, T p ≤ t ≤ T e . On the other hand, the solution tȯ s(t) = βs(t), where the initial condition is given by s( T p ) = B LD (x(T p )) = Le −βT β , is s(t) = B LD (x(T p ))e β(t−Tp) , ∀t ≥ T p . It thus follows that s(T p + T ) = L β e −βT e βT = L β , and T p +T is the first time at which the trajectory s(t), t ≥ T p , exits C. Because α( Le −βT β − r(t)) ≤ 0, T p ≤ t ≤ T e , we obtain, by the Comparison Lemma [10], [60,Theorem 1.10.2], B LD (x(t)) ≤ r(t) ≤ s(t) for all t, T p ≤ t ≤ T e . If we assume T e < T p +T , it contradicts the fact that B LD (x(T e )) ≤ s(T e ) < s(T p + T ) = L β , and hence T e ≥ T p + T . Therefore, we conclude that any Lipschitz continuous policy φ : X → U such that φ(x) ∈ S T LD (x), ∀x ∈ C, renders the dynamical system safe up to time T p + T (≥ T ) whenever the initial state x(0) is in B T LD . APPENDIX B ON PROPOSITION IV.1 The width of a feasible set is defined as the unique solution to the following linear program: u ω (x) = max [u T ,ω] T ∈R nu+1 ω (B.1) s.t. 
L f B LD (x) + L g B LD (x)u + ω ≤ α Le −βT β − B LD (x) + βB LD (x) u + [ω, ω . . . , ω] T ∈ U APPENDIX C PROOF OF THEOREM IV.2 Because, by definition, V φ,β c (x) ≥L β , ∀x ∈ X \ C, it follows that C = x ∈ X :V φ,β c (x) <L β ⊂ C. Because the continuously differentiable functionV φ,β c (x) sat- isfiesV φ,β c (x) = L fV φ,β c (x) + L gV φ,β c (x)φ(x) = βV φ,β c (x) −ˆ c (x), ∀x ∈ C, andˆ c (x) ≥ 0, ∀x ∈ C, there exists at least one policy φ that satisfies φ(x) ∈ S T LD (x) = {u ∈ U : L fV φ,β c (x) + L gV φ,β c (x)u ≤ α L e −βT β −V φ,β c (x) + βV φ,β c (x)}, for all x ∈ C and for a monotonically increasing locally Lipschitz continuous function α such that α(q) = 0, ∀q ≤ 0. Therefore,V φ,β c is an LDCBF for the setĈ. Remark C.1. A sufficiently large constant c could be chosen in practice. Ifˆ c (x) > 0 for all x ∈ C and the value function is learned by using a policy φ such that φ(x)+[c φ , c φ . . . , c φ ] T ∈ U for some c φ > 0, then the unique solution to the linear program (B.1) satisfies u ω (x) > 0 for any x ∈ C. APPENDIX D PROOF OF PROPOSITION V.1 Under Assumption III.1, the trajectories x(t) with an initial condition x(0) ∈ B T LD satisfying the given LDCBF condition (that uses the battery dynamics model dÊ/dt = −K d ) exist and are unique over t ≥ 0. Let the actual exit time of E being below E min be T energy > 0, and let the first time at which the trajectory x(t) exits C be T e > 0 (T e = ∞ if the agent never exits C). Note it holds that T e ≤ T energy . Also, let E t and ρ t be the trajectories of E and ρ(p). Definê T e := min T e , inf {t : ρ t = 0 ∧ x(t) / ∈ B T LD } , and T p := max t : x(t) ∈ ∂B T LD ∧ t ≤T e , where ∂B T LD denotes the boundary of B T LD . Now, if E t = E min and ρ t > 0 for some t < T p , it contradicts the fact that B LD (x(t)) = E max − E t + ρ t < E max − E min (∵ x(t) ∈ C, ∀t < T p ) . Therefore, it follows that E t > E min or ρ t = 0 for all t < T p . This implies that we should only consider the time t ≥ T p by assuming that ρ Tp ≥ 0. LetÊ t be the trajectory following the virtual battery dynamics dÊ/dt = −K d andÊ Tp = E Tp , and let s(t) be the unique solution tȯ s(t) = βs(t), t ≥ T p , where s(T p ) = B LD (x(T p )) = (E max − E min )e −βT . Also, let (t) = s(t) +Ê t − E max , t ≥ T p . Then, the time at which s(t) becomes E max − E min is T p + T because s(T + T p ) = B LD (x(T p ))e β(T +Tp−Tp) = (E max − E min )e −βT e β(T +Tp−Tp) = E max − E min . Sincê T energy (E Tp ) ≤T energy (E 0 ) < T and (t) = B LD (x(T p ))e β(t−Tp) +Ê Tp − K d (t − T p ) − E max , we obtain T 0 := inf {t : (t) = 0 ∧ t ≥ T p } ≤ T p +T energy (E Tp ). On the other hand, the actual battery dynamics can be written as dE/dt = −K d +∆(x), where ∆(x) ≥ 0. Therefore, we see that the trajectory x(t) satisfies dB LD (x) dt ≤ βB LD (x) − ∆(x), ∀t, T p ≤ t ≤T e . Then, because d (B LD (x(t)) − s(t)) dt ≤ β (B LD (x(t)) − s(t)) − ∆(x(t)) ≤ β (B LD (x(t)) − s(t)) , ∀t, T p ≤ t ≤T e , and β (B LD (x(T p )) − s(T p )) = 0, we obtain B LD (x(t)) − s(t) ≤ − t 0 ∆(x(t))dt, ∀t, T p ≤ t ≤T e . Also, it is straightforward to see that T energy ≥ T e ≥ T p + T ≥ T p +T energy (E Tp ). From the definitions of B LD and (t), it follows that ρ t − (t) = B LD (x(T p )) − s(t) + E t −Ê t ≤ − t 0 ∆(x(t))dt + t 0 ∆(x(t))dt = 0, ∀t, T p ≤ t ≤T e , which leads to the inequality ρ t ≤ (t) for all t, T p ≤ t ≤T e . Hence, we conclude that T := inf {t : ρ t = 0 ∧ t ≥ T p } ≤T 0 ≤ T p +T energy (E Tp ) ≤ T energy , andT e =T , which proves the proposition. 
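The quantities c and L-hat appearing in Theorem IV.2 and Appendix C have to be chosen in practice. A small sample-based sketch follows; the sampling sets are illustrative, ell_hat is assumed to evaluate the estimated immediate cost of Section IV-B, and a finite sample only upper-bounds the infimum defining L-hat, so a safety margin is advisable.

import math
import torch

def choose_c(ell_hat, safe_samples):
    """Pick c >= 0 such that ell_hat(x) + c >= 0 on a batch of sampled safe states."""
    with torch.no_grad():
        return max(0.0, -float(ell_hat(safe_samples).min()))

def estimate_L_hat(V_hat, unsafe_samples, beta, c):
    """Sample-based estimate of L_hat = inf_{y in X \\ C} beta * V_c(y), with
    V_c = V_hat + c / beta (Theorem IV.2); finitely many samples only give an
    upper bound on the infimum."""
    with torch.no_grad():
        return beta * float((V_hat(unsafe_samples) + c / beta).min())

def in_B_T_LD_hat(V_hat, x, c, L_hat, beta, T):
    """Check x in {x : V_c(x) <= L_hat * exp(-beta * T) / beta}, the set of
    initial states that Theorem IV.2 certifies as safe up to time T."""
    with torch.no_grad():
        return bool(V_hat(x) + c / beta <= L_hat * math.exp(-beta * T) / beta)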
APPENDIX E STOCHASTIC LIMITED DURATION CONTROL BARRIER FUNCTIONS For the system dynamics described by a stochastic differential equation: dx = h(x(t), u(t))dt + η(x(t))dw, (E.1) where h : R nx × U → R nx is the drift, η : R nx → R nx×nw is the diffusion, w is a Brownian motion of dimension n w ∈ Z + , it is often impossible to guarantee safety with probability one without making specific assumptions on the dynamics. Therefore, we instead consider an upper bound on the probability that a trajectory escape from the set of safe states within a given finite time. We give the formal definition below. Definition E.1 (Limited-duration safety for stochastic systems). Let B T,δ SLD be a closed nonempty subset of a open set of safe states C S and τ the first exit time τ := inf{t : x(t) = C S }. Then, the stopped processx(t) defined bỹ x(t) := x(t), t < τ, x(τ ), t ≥ τ, (E.2) where x(t) evolves by (E.1), is safe up to time T > 0 with probability δ, 0 ≤ δ ≤ 1, if there exists a policy φ which ensures that P x(t) = C S for some t, 0 ≤ t ≤ T : x(0) ∈ B T,δ SLD ≤ 1 − δ. To present stochastic limited duration control barrier functions (SLDCBFs) that are stochastic counterparts of LDCBFs, we define the infinitesimal generator G, for a function B SLD : X → R ≥0 , by G(B SLD )(x) := − 1 2 tr ∂ 2 B SLD (x) ∂x 2 η(x)η(x) T − ∂B SLD (x) ∂x h(x, φ(x)), x ∈ int(X ), where tr stands for the trace. Also, we make the following assumption. Then, the following theorem holds. Theorem E.1. Given T > 0 and δ, 0 ≤ δ ≤ 1, define a set of safe states C S := x ∈ X : B SLD (x) < L β , L > 0, β > 0, for a twice continuously differentiable function B SLD : X → R ≥0 . Define also the set B T,δ SLD as B T,δ SLD := x ∈ X : B SLD (x) ≤ (1 − δ) Le −βT β ⊂ C S . If B T,δ SLD is nonempty and if there exists a Lipschitz continuous policy φ : X → U satisfying φ(x) ∈ S T SLD := {u ∈ U : −G(B SLD )(x) ≤ βB SLD (x)}, for all x ∈ C S , then, under Assumption E.1, the policy φ renders the stopped processx(t) in (E.2) safe up to time T with probability δ. Proof. DefineB SLD : R ≥0 → R ≥0 as B SLD (t) := e −βt B SLD (x(t)) . Becausex(t) is an E x B SLD (x(t)) −B SLD (x(0)) = E x    t 0 −e −βt [G(B SLD ) + βB SLD ] (x(s)) ≤0 ds    for any t, 0 ≤ t < ∞, from which it follows that E x B SLD (x(t)) ≤B SLD (x(0)). Therefore,B SLD is a supermartingale with respect to the filtration {M t : t ≥ 0} generated byx(t) because B SLD (x(t)) is twice continuously differentiable and C S is bounded. We thus obtain [62, p.25] P sup 0 ≤ t ≤ T B SLD (x(t)) ≥ L β ≤ P sup 0 ≤ t ≤ TB SLD (t) ≥ Le −βT β ≤ βe βTB SLD (0) L = βe βT B SLD (x(0)) L ≤ 1 − δ. To make a claim similar to Theorem IV.2, we consider the following value function associated with the policy φ: V φ,β (x) := E x ∞ 0 e −βt (x(t))dt, β ≥ 0, where (x(t)) is the immediate cost and E x is the expectation for all trajectories (time evolutions of x(t)) starting from x = x(0). When V φ,β is twice continuously differentiable over int(X ), we obtain the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation [44]: βV φ,β (x) = −G(V φ,β )(x) + (x), x ∈ int(X ). (E.3) Given the set of safe states C S := {x ∈ X : (x) < L} , L > 0, for (x) ≥ 0, ∀x ∈ X , and the stopped process (E.2), assume that we employ a twice continuously differentiable function approximator to approximate the value function V φ,β for the stopped process, and letV φ,β denote the approximation of V φ,β . By using the HJBI equation (E.3), define the estimated immediate cost functionˆ aŝ (x) = βV φ,β (x) + G(V φ,β )(x), ∀x ∈ C S . 
Select $c \ge 0$ so that $\hat{\ell}_c(x) := \hat{\ell}(x) + c \ge 0$ for all $x \in C_S$, and define the function $\hat{V}^{\phi,\beta}_c(x) := \hat{V}^{\phi,\beta}(x) + \frac{c}{\beta}$. Now, the following theorem holds. The function $B_{LD1} \vee B_{LD2}$ is, however, nonsmooth in general. Therefore, even if we consider differential inclusions and the associated Carathéodory solutions as in [23,63], there might exist sliding modes that violate the inequalities imposed by the function $B_{LD1} \vee B_{LD2}$ at $x \in \Omega_{f+g\phi}$. Here, $\Omega_{f+g\phi}$ denotes the zero-measure set where the dynamics is nondifferentiable (see [64] for detailed arguments, for example). Nevertheless, to obtain a smooth LDCBF for $C$, we may obtain a smooth approximation of a possibly discontinuous policy $\phi$ that satisfies $\frac{dB_{LDj}(x^*)}{dt} \le \alpha\!\left(\frac{Le^{-\beta T}}{\beta} - B_{LDj}(x^*)\right) + \beta B_{LDj}(x^*)$ for all $x^*$ such that $B_{LD1}(x^*) < B_{LD2}(x^*)$ for $j = 1$ and $B_{LD2}(x^*) < B_{LD1}(x^*)$ for $j = 2$. Then, we can conduct value function learning to obtain an LDCBF for $C$ with a set $\hat{B}^T_{LD}$ that is possibly smaller than $B^T_{LD}$.
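In summary, the supermartingale argument in the proof of Theorem E.1 (Appendix E) yields the following bound on the probability of leaving the safe set within the horizon $T$, restated here in one line:

\begin{equation*}
\mathbb{P}\!\left(\sup_{0 \le t \le T} B_{SLD}(\tilde{x}(t)) \ge \tfrac{L}{\beta}\right)
\;\le\; \frac{\beta\, e^{\beta T} B_{SLD}(x(0))}{L} \;\le\; 1 - \delta,
\qquad x(0) \in B^{T,\delta}_{SLD},
\end{equation*}

so that the stopped process remains in $C_S$ on $[0, T]$ with probability at least $\delta$.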
7,890
1908.09506
2969721933
When deploying autonomous agents in unstructured environments over sustained periods of time, adaptability and robustness oftentimes outweigh optimality as a primary consideration. In other words, safety and survivability constraints play a key role and in this paper, we present a novel, constraint-learning framework for control tasks built on the idea of constraints-driven control. However, since control policies that keep a dynamical agent within state constraints over infinite horizons are not always available, this work instead considers constraints that can be satisfied over a sufficiently long time horizon T > 0, which we refer to as limited-duration safety. Consequently, value function learning can be used as a tool to help us find limited-duration safe policies. We show that, in some applications, the existence of limited-duration safe policies is actually sufficient for long-duration autonomy. This idea is illustrated on a swarm of simulated robots that are tasked with covering a given area, but that sporadically need to abandon this task to charge batteries. We show how the battery-charging behavior naturally emerges as a result of the constraints. Additionally, using a cart-pole simulation environment, we show how a control policy can be efficiently transferred from the source task, balancing the pole, to the target task, moving the cart to one direction without letting the pole fall down.
Moreover, our work is related to safe reinforcement learning, such as Lyapunov-based safe learning (cf. @cite_40 @cite_28 ) and constrained Markov decision processes (CMDPs) (cf. @cite_31 @cite_50 ). The former exploits the fact that sublevel sets of a control Lyapunov function are forward invariant and treats stability as safety; the latter aims at selecting an optimal policy that satisfies the given constraints. Note that both approaches are designed for one specific task. Our work, in contrast, does not require stability and can handle an arbitrarily shaped set of safe states.
{ "abstract": [ "Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied on safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.", "Lyapunov design methods are used widely in control engineering to design controllers that achieve qualitative objectives, such as stabilizing a system or maintaining a system's state in a desired operating range. We propose a method for constructing safe, reliable reinforcement learning agents based on Lyapunov design principles. In our approach, an agent learns to control a system by switching among a number of given, base-level controllers. These controllers are designed using Lyapunov domain knowledge so that any switching policy is safe and enjoys basic performance guarantees. Our approach thus ensures qualitatively satisfactory agent behavior for virtually any reinforcement learning algorithm and at all times, including while the agent is learning and taking exploratory actions. We demonstrate the process of designing safe agents for four different control problems. In simulation experiments, we find that our theoretically motivated designs also enjoy a number of practical benefits, including reasonable performance initially and throughout learning, and accelerated learning.", "In many real-world reinforcement learning (RL) problems, besides optimizing the main objective function, an agent must concurrently avoid violating a number of constraints. In particular, besides optimizing performance it is crucial to guarantee the of an agent during training as well as deployment (e.g. a robot should avoid taking actions - exploratory or not - which irrevocably harm its hardware). To incorporate safety in RL, we derive algorithms under the framework of Constrained Markov decision problems (CMDPs), an extension of the standard Markov decision problems (MDPs) augmented with constraints on expected cumulative costs. Our approach hinges on a novel method. We define and present a method for constructing Lyapunov functions, which provide an effective way to guarantee the global safety of a behavior policy during training via a set of local, linear constraints. Leveraging these theoretical underpinnings, we show how to use the Lyapunov approach to systematically transform dynamic programming (DP) and RL algorithms into their safe counterparts. To illustrate their effectiveness, we evaluate these algorithms in several CMDP planning and decision-making tasks on a safety benchmark domain. 
Our results show that our proposed method significantly outperforms existing baselines in balancing constraint satisfaction and performance.", "For many applications of reinforcement learning it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (, 2016; , 2015; , 2016; , 2016) have enabled new capabilities in high-dimensional control, but do not consider the constrained setting. We propose Constrained Policy Optimization (CPO), the first general-purpose policy search algorithm for constrained reinforcement learning with guarantees for near-constraint satisfaction at each iteration. Our method allows us to train neural network policies for high-dimensional control while making guarantees about policy behavior all throughout training. Our guarantees are based on a new theoretical result, which is of independent interest: we prove a bound relating the expected returns of two policies to an average divergence between them. We demonstrate the effectiveness of our approach on simulated robot locomotion tasks where the agent must satisfy constraints motivated by safety." ], "cite_N": [ "@cite_28", "@cite_40", "@cite_31", "@cite_50" ], "mid": [ "2618318883", "2164479831", "2964340170", "2962803570" ] }
Constraint Learning for Control Tasks with Limited Duration Barrier Functions
Acquiring an optimal policy that attains the maximum return over some time horizon is of primary interest in the literature of both reinforcement learning [1][2][3] and optimal control [4]. A large number of algorithms have been designed to successfully control systems with complex dynamics to accomplish specific tasks, such as balancing an inverted pendulum and letting a humanoid robot run to a target location. Those algorithms may result in control strategies that are energy-efficient, take the shortest path to the goal, spend less time to accomplish the task, and sometimes outperform human beings in these senses (cf. [5]). As we can observe in the daily life, on the other hand, it is often difficult to attribute optimality to human M. Ohnishi is with the the RIKEN Center for Advanced Intelligence Project, Tokyo, Japan, and with the Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA (email: mohnishi@cs.washington.edu). G. Notomista is with the School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA (e-mail: g.notomista@gatech.edu). M. Sugiyama is with the RIKEN Center for Advanced Intelligence Project, Tokyo, Japan, and with the Department of Complexity Science and Engineering, the University of Tokyo (e-mail: sugi@k.u-tokyo.ac.jp). M. Egerstedt is with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA (e-mail: mag-nus@gatech.edu). behaviors, e.g., the behaviors are hardly the most efficient for any specific task (cf. [6]). Instead, humans are capable of generalizing the behaviors acquired through completing a certain task to deal with unseen situations. This fact casts a question of how one should design a learning algorithm that generalizes across tasks rather than focuses on a specific one. In this paper, we hypothesize that this can be achieved by letting the agents acquire a set of good enough policies when completing one task instead of finding a single optimal policy, and reuse this set for another task. Specifically, we consider safety, which refers to avoiding certain states, as useful information shared among different tasks, and we regard limited-duration safe policies as good enough policies. Our work is built on the idea of constraints-driven control [7,8], a methodology for controlling agents by telling them to satisfy constraints without specifying a single optimal path. If feasibility of the assigned constraints is guaranteed, this methodology avoids recomputing an optimal path when a new task is given but instead enables high-level compositions of constraints. However, state constraints are not always feasible and arbitrary compositions of constraints cannot be validated in general [9]. We tackle this feasibility issue by relaxing safety (i.e., forward invariance [10] of the set of safe states) to limited-duration safety, by which we mean satisfaction of safety over some finite time horizon T > 0 (see Figure I.1). For an agent starting from a certain subset of the safe region, one can always find a set of policies that render this agent safe up to some finite time. To guarantee limited-duration safety, we propose a limited duration control barrier function (LDCBF). The idea is based on local model-based control that constrains the instantaneous control input every time to restrict the growths of values of LDCBFs by solving a computationally inexpensive quadratic programming (QP) problem. 
To find an LDCBF, we make use of so-called global value function learning. More specifically, we assign a high cost to unsafe states and a lower cost to safe states, and learn the value function (or discounted infinite-horizon cost) associated with any given policy. Then, it is shown that the value function associated with any given policy is an LDCBF, i.e., a nonempty set of limited-duration safe policies can be obtained (Section IV-B). Contrary to the optimal control and Lyapunovbased approaches that only single out an optimal policy, our learning framework aims at learning a common set of policies that can be shared among different tasks. Thus, our framework can be contextualized within the so-called lifelong learning [11] and transfer learning [12] (or safe transfer; see Section V-B). The rest of this paper is organized as follows: Section II discusses the related work, including constraints-driven control and transfer learning, Section III presents notations, assumptions made in this paper, and some background knowledge. Subsequently, we present our main contributions and their applications in Section IV and Section V, respectively. In Section VI, we first validate LDCBFs on an existing control problem (see Section II). Then, our learning framework is applied to the cart-pole simulation environment in DeepMind Control Suite [13]; safety is defined as keeping the pole from falling down, and we use an LDCBF obtained after learning the balance task to facilitate learning the new task, namely, moving the cart without letting the pole fall down. III. PRELIMINARIES Throughout, R, R ≥0 and Z + are the sets of real numbers, nonnegative real numbers and positive integers, respectively. Let · R d := x, x R d be the norm induced by the inner product x, y R d := x T y for d-dimensional real vectors x, y ∈ R d , where (·) T stands for transposition. In this paper, we consider an agent with system dynamics described by an ordinary differential equation: dx dt = h(x(t), u(t)),(1) where x(t) ∈ R nx and u(t) ∈ U ⊂ R nu are the state and the instantaneous control input of dimensions n x , n u ∈ Z + , and h : R nx × U → R nx . Let X be the state space which is a compact subset of R nx . 1 In this work, we make the following assumptions. Assumption III.1. For any locally Lipschitz continuous policy φ : X → U, h is locally Lipschitz with respect to x. Assumption III.2. The control space U(⊂ R nu ) is a polyhedron. Given a policy φ : X → U and a discount factor β > 0, define the value function associated with the policy φ by V φ,β (x) := ∞ 0 e −βt (x(t))dt, β > 0, where (x(t)) is the immediate cost and x(t) is the trajectory starting from x(0) = x. When V φ,β is continuously differentiable over int(X ), namely, the interior of X , we obtain the Hamilton-Jacobi-Bellman (HJB) equation [44]: βV φ,β (x) = ∂V φ,β (x) ∂x h(x, φ(x)) + (x), x ∈ int(X ).(2) Now, if the immediate cost (x) is positive for all x ∈ X except for the equilibrium, and that a zero-cost state is globally asymptotically stable with the given policy φ : X → U, one can expect that the value function V φ,0 (x) := ∞ 0 (x(t))dt has finite values over X . In this case, the HJB equation becomesV φ,0 (x(t)) := dV φ,0 dt (x(t)) = − (x(t)), i.e., V φ,0 is decreasing over time. As such, it is straightforward to see that V φ,0 is a control Lyapunov function, i.e., there always exists a policy that satisfies the decrease condition for V φ,0 . 
However, there are two major limitations to this approach: i) one must assume that the agent stabilizes in a zero-cost state by the given policy φ, and ii) forward invariant sublevel sets of the control Lyapunov function usually become too conservative with respect to the given safe states. 2 To remedy these drawbacks, we present our major contribution in the next section. IV. CONSTRAINT LEARNING FOR CONTROL TASKS In this section, we propose limited duration control barrier functions (LDCBFs), and present their properties and a practical way to find an LDCBF. A. Limited Duration Control Barrier Functions Before formally presenting LDCBFs, we give the following definition. Definition IV.1 (Limited-duration safety). Given an open set of safe states C, let B T LD be a closed nonempty subset of C ⊂ X . The dynamical system (1) is said to be safe up to time T , if there exists a policy φ that ensures x(t) ∈ C for all 0 ≤ t < T whenever x(0) ∈ B T LD . Now, we give the definition of the LDCBFs. Definition IV.2. Let a function B LD : X → R ≥0 be continuously differentiable. Suppose that h(x, u) = f (x) + g(x) u, x ∈ X , u ∈ U, and that the set of safe states is given by C := x ∈ X : B LD (x) < L β , L > 0, β > 0.(3) Define the set B T LD = x ∈ X : B LD (x) ≤ Le −βT β ⊂ C, for some T > 0. Define also L f and L g as the Lie derivatives along f and g. Then, B LD is called a limited duration control barrier function for C and for the time horizon T if B T LD is nonempty and if there exists a monotonically increasing locally Lipschitz continuous function 3 α : R → R such that α(0) = 0 and inf u∈U {L f B LD (x) + L g B LD (x)u} ≤ α Le −βT β − B LD (x) + βB LD (x),(4) for all x ∈ C. Given an LDCBF, the admissible control space S T LD (x), x ∈ C, can be defined as S T LD (x) := {u ∈ U : L f B LD (x) + L g B LD (x)u ≤ α Le −βT β − B LD (x) + βB LD (x)}.(5) Given an LDCBF, safety up to time T is guaranteed if the initial state is taken in B T LD and an admissible control is employed as the following theorem claims. Theorem IV.1. Given a set of safe states C defined by (3) and an LDCBF B LD defined on X under Assumption III.1, any locally Lipschitz continuous policy φ : X → U that satisfies φ(x) ∈ S T LD (x), ∀x ∈ C, renders the dynamical system (1) safe up to time T whenever the initial state is in B T LD . Proof. See Appendix A. When h(x, u) = f (x) + g(x)u, x ∈ X , u ∈ U, one can constrain the control input within the admissible control space S T LD (x), x ∈ C, using a locally accurate model via QPs in the same manner as control barrier functions and control Lyapunov functions. Here, we present a general form of control syntheses via QPs. Proposition IV.1. Suppose that h(x, u) = f (x) + g(x)u, x ∈ X , u ∈ U. Given an LDCBF B LD with a locally Lipschitz derivative and the admissible control space S T LD (x * ) at x * ∈ C defined in (5), consider the QP: φ(x * ) = argmin u∈S T LD (x) u T H(x * )u + 2b(x * ) T u,(6) where H and b are Lipschitz continuous at x * ∈ C, and H(x * ) = H T (x * ) is positive definite. If the width 4 of a feasible set is strictly larger than zero, then, under Assumption III.2, the policy φ(x) defined in (6) is unique and Lipschitz continuous with respect to the state at x * . Proof. Slight modifications of [45, Theorem 1] proves the proposition. To see an advantage of considering LDCBFs, we show that an LDCBF can be obtained systematically as described next. B. 
Finding a Limited Duration Control Barrier Function We present a possible way to find an LDCBF B LD for the set of safe states through global value function learning. Let (x) ≥ 0, ∀x ∈ X , and suppose that the set of safe states is given by C := {x ∈ X : (x) < L} , L > 0.(7) Given the dynamical system defined in Definition IV.2, consider the virtual systeṁ x(t) = f (x(t)) + g(x(t))φ(x(t)) if x(t) ∈ C otherwise 0,(8) for a policy φ. Assume that we employ a continuously differentiable function approximator to approximate the value function V φ,β for the virtual system, and letV φ,β denote an approximation of V φ,β . By using the HJB equation (2), define the estimated immediate cost functionˆ aŝ (x) = βV φ,β (x) − L fV φ,β (x)−L gV φ,β (x)φ(x), ∀x ∈ C. Select c ≥ 0 so thatˆ c (x) :=ˆ (x) + c ≥ 0 for all x ∈ C, and define the functionV φ,β c (x) :=V φ,β (x) + c β . Then, the following theorem holds. Theorem IV.2. Consider the set B T LD = x ∈ X :V φ,β c (x) ≤L e −βT β , whereL := inf y ∈ X \ C βV φ,β c (y). IfB T LD is nonempty, then the dynamical system starting from the initial state inB T LD is safe up to time T when the policy φ is employed, andV φ,β c (x) is an LDCBF for the set C := x ∈ X :V φ,β c (x) <L β ⊂ C. Proof. See Appendix C. Remark IV.1. We need to considerV φ,β c instead of V φ,β because the immediate cost function and the virtual system (8) need to be sufficiently smooth to guarantee that V φ,β is continuously differentiable. In practice, the choice of c andL affects conservativeness of the set of safe states. Note, to enlarge the set B T LD , the immediate cost (x) is preferred to be close to zero for x ∈ C, and L needs to be sufficiently large. Example IV.1. As an example of finding an LDCBF, we use a deep neural network. Suppose the discrete-time transition is given by (x n , u n , (x n ), x n+1 ), where n ∈ 0 ∪ Z + is the time instant. Then, by executing a given policy φ, we store the negative data, where x n+1 / ∈ C, and the positive data, where x n+1 ∈ C, separately, and conduct prioritized experience replay [46,47]. Specifically, initialize a target networkV φ,β Target and a local networkV φ,β Local , and update the local network by sampling a random minibatch of N negative and positive transitions {(x ni , u ni , (x ni ), x ni+1 )} i∈{1,2,...,N } to minimize 1 N N i=1 (y ni −V φ,β Local (x ni )), where y ni = (x ni ) + γV φ,β Target (x ni+1 ) x ni+1 ∈ C, L 1−γ x ni+1 / ∈ C. Here, γ ≈ − log (β)/∆ t is a discount factor for a discretetime case, where ∆ t is the time interval of one time step. The target network is soft-updated using the local network byV φ,β Target ← µV φ,β Local + (1 − µ)V φ,β Target for µ 1. One can transform the learned local network to a continuous-time form via multiplying it by ∆ t . Although we cannot ensure forward invariance of the set C using LDCBFs, the proposed approach is still set theoretic. As such, we can consider the compositions of LDCBFs. C. Compositions of Limited Duration Barrier Functions The Boolean compositional CBFs were studied in [23,48]. In [23], max and min operators were used for the Boolean operations, and nonsmooth barrier functions were proposed out of necessity. However, it is known that, even if two sets C 1 ⊂ X and C 2 ⊂ X are controlled invariant [49, page 21] for the dynamical system (1), the set C 1 ∩ C 2 is not necessarily controlled invariant, while C 1 ∪C 2 is indeed controlled invariant [49, Proposition 4.13]. We can make a similar assertion for limited-duration safety as follows. Proposition IV.2. 
Assume that there exists a limited-duration safe policy for each set of safe states C j ⊂ X , j ∈ {1, 2, . . . , J}, J ∈ Z + , that renders an agent with the dynamical system (1) safe up to time T whenever starting from inside a closed nonempty set B LDj ⊂ C j . Then, given the set of safe states C := J j=1 C j , there also exists a policy rendering the dynamical system safe up to time T whenever starting from any state in B LD := J j=1 B LDj . Proof. A limited-duration safe policy for C j also keeps the agent inside the set C up to time T when starting from inside B LDj . If there exist LDCBFs for C j s, it is natural to ask if there exists an LDCBF for C. Because of the nonsmoothness stemming from Boolean compositions, however, obtaining an LDCBF for C requires an additional learning in general (see Appendix F). Also, existence of an LDCBF for the intersection of multiple sets of safe states is not guaranteed, and we need an additional learning as well. So far, we have seen a possible way to obtain an LDCBF for a given set of safe states expressed as in (7). As our approach is set-theoretic rather than specifying a single optimal policy, it is also compatible with the constraints-driven control and transfer learning as described in the next section. V. APPLICATIONS In this section, we present two practical examples that illustrate benefits of considering LDCBFs, namely, long-duration autonomy and transfer learning. A. Applications to Long-duration Autonomy In many applications, guaranteeing particular properties (e.g., forward invariance) over an infinite-time horizon is difficult or some forms of approximations are required. Specifically, when a function approximator is employed, there will certainly be an approximation error. Nevertheless, it is often sufficient to guarantee safety up to certain finite time, and our proposed LDCBFs act as useful relaxations of CBFs. To see that one can still achieve long-duration autonomy by using LDCBFs, we consider the settings of work in [27]. Suppose that the state x := [E, p T ] T ∈ R 3 has the information of energy level E ∈ R ≥0 and the position p ∈ R 2 of an agent. Suppose also that E max > 0 is the maximum energy level and ρ(p) ≥ 0 (equality holds only when the agent is at a charging station) is the energy required to bring the agent to a charging station from the position p ∈ R 2 . Then, although we emphasize that we can obtain an LDCBF by value function learning if necessary, let us assume that the function B LD (x) := E max − E + ρ(p) ≥ 0 is an LDCBF, for simplicity. Then, by letting L = β(E max − E min ) for some β > 0 and for the minimum necessary energy level E min , 0 ≤ E min < E max , the set of safe states can be given by C := x ∈ X : B LD (x) < L β . Now, under these settings, the following proposition holds. Proposition V.1. Assume that the energy dynamics is lower bounded by dE dt ≥ −K d , ∃K d > 0, which implies that the least exit timeT energy (E) of E being below E min iŝ T energy (E) = (E − E min ) K d . Also, suppose we employ a locally Lipschitz continuous policy φ that satisfies the LDCBF condition using a battery dynamics model dÊ dt = −K d . Then, by taking T >T energy (E 0 ) for the initial energy level E 0 > E min and letting B T LD := x ∈ X : B LD (x) ≤ Le −βT β ⊂ C, the agent starting from a state in B T LD will reach the charging station before the energy reaches to E min . Proof. See Appendix D. 
Hence, LDCBFs are shown to be applicable to some cases where it is difficult to guarantee that certain properties hold over infinite horizons, but where limited-duration safety suffices. One of the benefits of using LDCBFs is that, once a set of limited-duration safe policies or good enough policies is obtained, one can reuse them for different tasks. Therefore, given that we can obtain an LDCBF through global value function learning for a policy that is not necessarily stabilizing the system, it is natural to ask if one can employ LDCBFs to transfer knowledge. Indeed, LDCBFs also have good compatibility with transfer learning (or safe transfer) as discussed below. B. Applications to Transfer Learning Given the immediate cost function, reinforcement learning aims at finding an optimal policy. Clearly, the obtained policy is not optimal with respect to a different immediate cost function in general. Therefore, employing the obtained policy straightforwardly in different tasks makes no sense. However, it is quite likely that the obtained policy is good enough even in different tasks because some mandatory constraints such as avoiding unsafe regions of the state space are usually shared among different tasks. Therefore we wish to exploit constraint learning for the sake of transfer learning, i.e., we learn constraints which are common to the target task while learning source tasks. Definition V.1 (Transfer learning, [12, modified version of Definition 1]). Given a set of training data D S for one task (i.e., source task) denoted by T S (e.g., an MDP) and a set of training data D T for another task (i.e., target task) denoted by T T , transfer learning aims to improve the learning of the target predictive function f T (i.e., a policy in our example) in D T using the knowledge in D S and T S , where D S = D T , or T S = T T . For example, when learning an optimal policy for the balance task of the cart-pole problem, one can simultaneously learn a set of limited-duration safe policies that keep the pole from falling down up to certain time T > 0. The set of these limited-duration safe policies is obviously useful for other tasks such as moving the cart to one direction without letting the pole fall down. Here, we present a possible application of limited duration control barrier functions to transfer learning. We take the following steps: 1) Design J ∈ Z + cost functions j , j ∈ {1, 2, . . . , J}, each of which represents a constraint by defining a set of safe state C j := {x ∈ X : j (x) < L} , L > 0, j ∈ {1, 2, . . . , J}. 2) Conduct reinforcement learning for a cost function by using any of the reinforcement learning techniques. 3) No matter the currently obtained policy φ is optimal or not, one can obtain an LDCBF B LDj for each set C j , j ∈ {1, 2, . . . , J}. More specifically, B LDj is given by B LDj (x) := V φ,β j (x) := ∞ 0 e −βt j (x(t))dt, j ∈ {1, 2, . . . , J}, for x(0) = x. 4) When learning a new task, policies are constrained by LDCBFs depending on which constraints are common to the new task. We study some practical implementations. Given LDCBFs B LDj , j ∈ {1, 2, ..., J}, define the set Φ T j of admissible policies as Φ T j := {φ :φ(x) ∈ S T LDj (x), ∀x ∈ C j } ⊂ Φ := {φ : φ(x) ∈ U, ∀x ∈ X }, where S T LDj (x) is the set of admissible control inputs at x for the jth constraint. If an optimal policy φ T T for the target task T T is included in Φ T j , one can conduct learning for the target task within the policy space Φ T j . 
If not, one can still consider Φ T j as a soft constraint and can explore the policy space Φ \ Φ T j with a given probability or can just select the initial policy from Φ T j . In practice, a parametrized policy is usually considered; a policy φ θ expressed by a parameter θ is updated via policy gradient methods [50]. If the policy is in the linear form with a fixed feature vector, the projected policy gradient method [51] can be used. Thanks to the fact that an LDCBF defines an affine constraint on instantaneous control inputs if the system dynamics is affine in control, the projected policy gradient method looks like θ ← Γ j [θ + λ∇ θ F T T (θ)]. Here, Γ j : Φ → Φ T j projects a policy onto the affine constraint defined by the jth constraint and F T T (θ) is the objective function for the target task which is to be maximized. For the policy not in the linear form, one may update policies based on LDCBFs by modifying the deep deterministic policy gradient (DDPG) method [52]: because through LDCBFs, the global property (i.e., limited-duration safety) is ensured by constraining local control inputs, it suffices to add penalty terms to the cost when updating a policy using samples. For example, one may employ the log-barrier extension proposed in [53], which is a smooth approximation of the hard indicator function for inequality constraints but is not restricted to feasible points. VI. EXPERIMENTS In this section, we validate our learning framework. First, we show that LDCBFs indeed work for the constraints-driven control problem considered in Section V-A by simulation. Then, we apply LDCBFs to a transfer learning problem for the cart-pole simulation environment. A. Constraints-driven coverage control of multi-agent systems Let the parameters be E max = 1.0, E min = 0.55, K d = 0.01, β = 0.005 and T = 50.0 > 45.0 = (E max − E min )/K d . We consider six agents (robots) with single integrator dynamics. An agent of the position p i := [x i , y i ] T is assigned a charging station of the position p charge,i , where x and y are the X position and the Y position, respectively. When the agent is close to the station (i.e., p i − p charge,i R 2 ≤ 0.05), it remains there until the battery is charged to E ch = 0.92. Actual battery dynamics is given by dE/dt = −0.01E. The coverage control task is encoded as Lloyd's algorithm [54] aiming at converging to the Centroidal Voronoi Tesselation, but with a soft margin so that the agent prioritizes the safety constraint. The locational cost used for the coverage control task is given as follows [55]: MATLAB simulation (the simulator is provided on the Robotarium [56] website: www.robotarium.org), we used the random seed rng(5) for determining the initial states. Note, for every agent, the energy level and the position are set so that it starts from inside the set B T LD . Figure VI.1 shows (a) the images of six agents executing coverage tasks and (b) images of the agents three of which are charging their batteries. Figure VI-B shows the simulated battery voltage data of the six agents, from which we can observe that LDCBFs worked effectively for the swarm of agents to avoid depleting their batteries. 6 i=1 Vi(p) p i −p 2 ϕ(p)dp, where V i (p) = {p ∈ X : p i −p ≤ p j −p , ∀j = i} is the Voronoi cell for the agent i. In particular, we used ϕ([x,ŷ] T ) = e −{(x−0.2) 2 +(ŷ−0.3) 2 }/0.06 + 0.5e −{(x+0.2) 2 +(ŷ+0.1) 2 }/0.03 . In B. Transfer from Balance to Move: Cart-pole problem Next, we apply LDCBFs to transfer learning. 
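For concreteness, the following sketch computes one Lloyd update for the coverage task just described: workspace samples on a grid are assigned to the nearest agent (a discrete Voronoi partition), the density-weighted centroids are computed with the ϕ given above, and each single-integrator agent is driven toward its centroid. The workspace bounds, gain, grid resolution, and initial positions are assumptions; the resulting nominal commands would then be passed through the LDCBF filter.

```python
import numpy as np

def density(q):
    """Density phi used in the locational cost of the coverage experiment."""
    x, y = q[..., 0], q[..., 1]
    return (np.exp(-((x - 0.2) ** 2 + (y - 0.3) ** 2) / 0.06)
            + 0.5 * np.exp(-((x + 0.2) ** 2 + (y + 0.1) ** 2) / 0.03))

def lloyd_step(positions, bounds=(-1.6, 1.6, -1.0, 1.0), n_grid=120, gain=1.0):
    """One Lloyd iteration: discrete Voronoi assignment on a grid, density-weighted
    centroids, and single-integrator commands u_i = gain * (c_i - p_i).
    Bounds, resolution, and gain are assumptions for illustration."""
    xs = np.linspace(bounds[0], bounds[1], n_grid)
    ys = np.linspace(bounds[2], bounds[3], n_grid)
    grid = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)     # (M, 2) samples
    w = density(grid)                                                # (M,) weights
    d2 = ((grid[:, None, :] - positions[None, :, :]) ** 2).sum(-1)   # (M, N) squared distances
    owner = d2.argmin(axis=1)                                        # nearest-agent assignment
    centroids = np.zeros_like(positions)
    for i in range(len(positions)):
        wi = w[owner == i]
        centroids[i] = (wi[:, None] * grid[owner == i]).sum(0) / max(wi.sum(), 1e-9)
    return gain * (centroids - positions), centroids

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    p = rng.uniform([-1.5, -0.9], [1.5, 0.9], size=(6, 2))   # six agents (assumed start)
    u_nominal, c = lloyd_step(p)
    print("nominal coverage commands:\n", np.round(u_nominal, 3))
```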
The simulation environment and the deep learning framework used in this experiment are "Cart-pole" in DeepMind Control Suite and PyTorch [57], respectively. We take the following steps: 1) Learn a policy that successfully balances the pole by using DDPG [52]. 2) Learn an LDCBF by using the obtained actor network. 3) Try a random policy with the learned LDCBF and a (locally) accurate model to see that LDCBF works reasonably. 4) With and without the learned LDCBF, learn a policy that moves the cart to left without letting the pole fall down, which we refer to as move-the-pole task. The parameters used for this experiment are summarized in Table VI.1. Here, angle threshold stands for the threshold of cos ψ where ψ is the angle of the pole from the standing position, and position threshold is the threshold of the cart position p. The angle threshold and the position threshold are used to terminate an episode. Note that the cart-pole environment of MuJoCo [58] xml data in DeepMind Control Suite is modified so that the cart can move between −3.8 and 3.8. As presented in Example IV.1, we use prioritized experience replay when learning an LDCBF. Specifically, we store the positive and the negative data, and sample 4 data points from the positive one and the remaining 60 data points from the negative one. In this experiment, actor, critic and LDCBF networks use ReLU nonlinearities. The actor network and the LDCBF network consist of two layers of 300 → 200 units, and the critic network is of two layers of 400 → 300 units. The control input vector is concatenated to the state vector from the second critic layer. Step1: The average duration (i.e., the first exit time, namely, the time when the pole first falls down) out of 10 seconds (corresponding to 1000 time steps) over 10 trials for the policy learned through the balance task by DDPG was 10 seconds. Step2: Then, by using this successfully learned policy, an LDCBF is learned by assigning the cost (x) = 1.0 for cos ψ < 0.2 and (x) = 0.1 elsewhere. Also, because the LDCBF is learned in a discrete-time form, we transform it to a continuous-time form via multiplying it by ∆ t = 0.01. When learning an LDCBF, we initialize each episode as follows: the angle ψ is uniformly sampled within −1.5 ≤ ψ ≤ 1.5, the cart velocityṗ is multiplied by 100 and the angular velocityψ is multiplied by 200 after being initialized by DeepMind Control Suite. The LDCBF learned by using this policy is illustrated in Figure VI Step3: To test this LDCBF, we use a uniformly random policy (φ(x) takes the value between −1 and 1) constrained by the LDCBF with the function α(q) = max {0.1q, 0} and with the time constant T = 5.0. When imposing constraints, we use the (locally accurate) control-affine model of the cart-pole in the work [59], where we replace the friction parameters by zeros for simplicity. The average duration out of 10 seconds over 10 trials for this random policy was 10 seconds, which indicates that the LDCBF worked sufficiently well. We also tried this LDCBF with the function α(q) = max {3.0q, 0} and T = 5.0, which resulted in the average duration of 5.58 seconds. Moreover, we tried the fixed policy φ(x) = 1.0, ∀x ∈ X , with the function α(q) = max {0.1q, 0} and T = 5.0, and the average duration was 4.73 seconds, which was sufficiently close to T = 5.0. Step4: For the move-the-pole task, we define the success by the situation where the cart position p, −3.8 ≤ p ≤ 3.8, ends up in the region of p ≤ −1.8 without letting the pole fall down. 
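A minimal sketch of the evaluation protocol used in Steps 1-3 above: the LDCBF cost assignment and the average first-exit duration over trials. The environment interface (`reset`, `run_step`) is a placeholder for a dm_control-style wrapper and the stand-in dynamics in the usage example exist only so the loop runs end to end; neither is the paper's code.

```python
import numpy as np

DT = 0.01            # control interval used to convert step counts to seconds
MAX_STEPS = 1000     # 10-second episodes, as in Step 1

def ldcbf_cost(cos_psi):
    """Immediate cost used when learning the LDCBF in Step 2:
    high cost once the pole leaves the upright region."""
    return 1.0 if cos_psi < 0.2 else 0.1

def average_first_exit_duration(run_step, reset, n_trials=10, cos_threshold=0.2):
    """Average first-exit time (seconds) over n_trials episodes.  `reset()` and
    `run_step()` are assumed wrappers around the environment; `run_step` applies the
    evaluated policy for one control step and returns the resulting cos(psi)."""
    durations = []
    for _ in range(n_trials):
        reset()
        steps = MAX_STEPS
        for n in range(MAX_STEPS):
            if run_step() < cos_threshold:   # pole has fallen: first exit from C
                steps = n
                break
        durations.append(steps * DT)
    return float(np.mean(durations))

if __name__ == "__main__":
    # Stand-in "environment": cos(psi) slowly decays with noise (assumption).
    state = {"cos_psi": 1.0}
    def reset(): state["cos_psi"] = 1.0
    def run_step():
        state["cos_psi"] -= abs(np.random.normal(0.0, 0.002))
        return state["cos_psi"]
    print("average first-exit duration [s]:", average_first_exit_duration(run_step, reset))
```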
The angle ψ is uniformly sampled within −0.5 ≤ ψ ≤ 0.5 and the rest follow the initialization of DeepMind Control Suite. The reward is given by (1 + cos ψ)/2×(utils.rewards.tolerance(ṗ + 1.0, bounds = (−2.0, 0.0), margin = 0.5)), where utils.rewards.tolerance is the function defined in [13]. In other words, to move the pole to left, we give high rewards when the cart velocity is negative and the pole is standing up. To use the learned LDCBF for DDPG, we store matrices and vectors used in linear constraints along with other variables such as control inputs and states, which are then used for experience replay. Then, the logbarrier extension cost proposed in [53] is added when updating policies. Also, we try DDPG without using LDCBF for the move-the-pole task. Both approaches initialize the policy by the one obtained after the balance task. The average success rates of the policies obtained after the numbers of episodes up to 15 over 10 trials are given in Table VI.2 for DDPG with the learned LDCBF and DDPG without LDCBF. This result implies that our proposed approach successfully transferred information from the source task to the target task by sharing a common safety constraint. VII. CONCLUSION In this paper, we presented a notion of limited-duration safety as a relaxation of forward-invariance of a set of safe states. Then, we proposed limited-duration control barrier functions that are used to guarantee limited-duration safety by using locally accurate model of agent dynamics. We showed that LDCBFs can be obtained through global value function learning, and analyzed some of their properties. LDCBFs were validated through persistent coverage control tasks and were successfully applied to a transfer learning problem via sharing a common state constraint. APPENDIX A PROOF OF THEOREM IV.1 Under Assumption III.1, the trajectories x(t) with an initial condition x(0) ∈ B T LD exist and are unique over t ≥ 0. Let the first time at which the trajectory x(t) exits C be T e > 0 and let T p , 0 ≤ T p < T e , denote the last time at which the trajectory x(t) passes through the boundary of B T LD from inside before first exiting C. Since α is locally Lipschitz continuous and B LD is continuously differentiable, the right hand side of (4) is locally Lipschitz continuous. Thus solutions to the differential equatioṅ r(t) = α Le −βT β − r(t) + βr(t), where the initial condition is given by r(T p ) = B LD (x(T p )), exist and are unique for all t, T p ≤ t ≤ T e . On the other hand, the solution tȯ s(t) = βs(t), where the initial condition is given by s( T p ) = B LD (x(T p )) = Le −βT β , is s(t) = B LD (x(T p ))e β(t−Tp) , ∀t ≥ T p . It thus follows that s(T p + T ) = L β e −βT e βT = L β , and T p +T is the first time at which the trajectory s(t), t ≥ T p , exits C. Because α( Le −βT β − r(t)) ≤ 0, T p ≤ t ≤ T e , we obtain, by the Comparison Lemma [10], [60,Theorem 1.10.2], B LD (x(t)) ≤ r(t) ≤ s(t) for all t, T p ≤ t ≤ T e . If we assume T e < T p +T , it contradicts the fact that B LD (x(T e )) ≤ s(T e ) < s(T p + T ) = L β , and hence T e ≥ T p + T . Therefore, we conclude that any Lipschitz continuous policy φ : X → U such that φ(x) ∈ S T LD (x), ∀x ∈ C, renders the dynamical system safe up to time T p + T (≥ T ) whenever the initial state x(0) is in B T LD . APPENDIX B ON PROPOSITION IV.1 The width of a feasible set is defined as the unique solution to the following linear program: u ω (x) = max [u T ,ω] T ∈R nu+1 ω (B.1) s.t. 
L f B LD (x) + L g B LD (x)u + ω ≤ α Le −βT β − B LD (x) + βB LD (x) u + [ω, ω . . . , ω] T ∈ U APPENDIX C PROOF OF THEOREM IV.2 Because, by definition, V φ,β c (x) ≥L β , ∀x ∈ X \ C, it follows that C = x ∈ X :V φ,β c (x) <L β ⊂ C. Because the continuously differentiable functionV φ,β c (x) sat- isfiesV φ,β c (x) = L fV φ,β c (x) + L gV φ,β c (x)φ(x) = βV φ,β c (x) −ˆ c (x), ∀x ∈ C, andˆ c (x) ≥ 0, ∀x ∈ C, there exists at least one policy φ that satisfies φ(x) ∈ S T LD (x) = {u ∈ U : L fV φ,β c (x) + L gV φ,β c (x)u ≤ α L e −βT β −V φ,β c (x) + βV φ,β c (x)}, for all x ∈ C and for a monotonically increasing locally Lipschitz continuous function α such that α(q) = 0, ∀q ≤ 0. Therefore,V φ,β c is an LDCBF for the setĈ. Remark C.1. A sufficiently large constant c could be chosen in practice. Ifˆ c (x) > 0 for all x ∈ C and the value function is learned by using a policy φ such that φ(x)+[c φ , c φ . . . , c φ ] T ∈ U for some c φ > 0, then the unique solution to the linear program (B.1) satisfies u ω (x) > 0 for any x ∈ C. APPENDIX D PROOF OF PROPOSITION V.1 Under Assumption III.1, the trajectories x(t) with an initial condition x(0) ∈ B T LD satisfying the given LDCBF condition (that uses the battery dynamics model dÊ/dt = −K d ) exist and are unique over t ≥ 0. Let the actual exit time of E being below E min be T energy > 0, and let the first time at which the trajectory x(t) exits C be T e > 0 (T e = ∞ if the agent never exits C). Note it holds that T e ≤ T energy . Also, let E t and ρ t be the trajectories of E and ρ(p). Definê T e := min T e , inf {t : ρ t = 0 ∧ x(t) / ∈ B T LD } , and T p := max t : x(t) ∈ ∂B T LD ∧ t ≤T e , where ∂B T LD denotes the boundary of B T LD . Now, if E t = E min and ρ t > 0 for some t < T p , it contradicts the fact that B LD (x(t)) = E max − E t + ρ t < E max − E min (∵ x(t) ∈ C, ∀t < T p ) . Therefore, it follows that E t > E min or ρ t = 0 for all t < T p . This implies that we should only consider the time t ≥ T p by assuming that ρ Tp ≥ 0. LetÊ t be the trajectory following the virtual battery dynamics dÊ/dt = −K d andÊ Tp = E Tp , and let s(t) be the unique solution tȯ s(t) = βs(t), t ≥ T p , where s(T p ) = B LD (x(T p )) = (E max − E min )e −βT . Also, let (t) = s(t) +Ê t − E max , t ≥ T p . Then, the time at which s(t) becomes E max − E min is T p + T because s(T + T p ) = B LD (x(T p ))e β(T +Tp−Tp) = (E max − E min )e −βT e β(T +Tp−Tp) = E max − E min . Sincê T energy (E Tp ) ≤T energy (E 0 ) < T and (t) = B LD (x(T p ))e β(t−Tp) +Ê Tp − K d (t − T p ) − E max , we obtain T 0 := inf {t : (t) = 0 ∧ t ≥ T p } ≤ T p +T energy (E Tp ). On the other hand, the actual battery dynamics can be written as dE/dt = −K d +∆(x), where ∆(x) ≥ 0. Therefore, we see that the trajectory x(t) satisfies dB LD (x) dt ≤ βB LD (x) − ∆(x), ∀t, T p ≤ t ≤T e . Then, because d (B LD (x(t)) − s(t)) dt ≤ β (B LD (x(t)) − s(t)) − ∆(x(t)) ≤ β (B LD (x(t)) − s(t)) , ∀t, T p ≤ t ≤T e , and β (B LD (x(T p )) − s(T p )) = 0, we obtain B LD (x(t)) − s(t) ≤ − t 0 ∆(x(t))dt, ∀t, T p ≤ t ≤T e . Also, it is straightforward to see that T energy ≥ T e ≥ T p + T ≥ T p +T energy (E Tp ). From the definitions of B LD and (t), it follows that ρ t − (t) = B LD (x(T p )) − s(t) + E t −Ê t ≤ − t 0 ∆(x(t))dt + t 0 ∆(x(t))dt = 0, ∀t, T p ≤ t ≤T e , which leads to the inequality ρ t ≤ (t) for all t, T p ≤ t ≤T e . Hence, we conclude that T := inf {t : ρ t = 0 ∧ t ≥ T p } ≤T 0 ≤ T p +T energy (E Tp ) ≤ T energy , andT e =T , which proves the proposition. 
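The width linear program (B.1) above can be solved numerically at any state. The sketch below uses scipy.optimize.linprog under the additional assumption that U is an axis-aligned box (the paper only assumes a polyhedron); the Lie-derivative values and the right-hand side are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def feasible_set_width(lfB, lgB, rhs, u_min, u_max):
    """Solve (B.1):  max_{u, w} w
        s.t.  lfB + lgB . u + w <= rhs          (rhs = alpha(...) + beta * B_LD)
              u + [w, ..., w]^T in U,  with U = [u_min, u_max]^{n_u} assumed a box.
    Returns the optimal width w* (positive iff the LDCBF constraint has slack)."""
    n_u = len(lgB)
    c = np.zeros(n_u + 1)                 # decision variable z = [u; w]; minimize -w
    c[-1] = -1.0
    A = [np.append(lgB, 1.0)]             # LDCBF row: lgB . u + w <= rhs - lfB
    b = [rhs - lfB]
    for i in range(n_u):                  # box rows: u_min_i <= u_i + w <= u_max_i
        row = np.zeros(n_u + 1); row[i] = 1.0; row[-1] = 1.0
        A.append(row.copy()); b.append(u_max[i])
        A.append(-row);       b.append(-u_min[i])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * (n_u + 1), method="highs")
    return res.x[-1] if res.success else -np.inf

if __name__ == "__main__":
    # Placeholder Lie derivatives and bound at some state x (assumed values).
    lfB, lgB, rhs = 0.3, np.array([0.5, -0.2]), 0.8
    print("width u_w(x) =", feasible_set_width(lfB, lgB, rhs,
                                               u_min=np.array([-1.0, -1.0]),
                                               u_max=np.array([1.0, 1.0])))
```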
APPENDIX E STOCHASTIC LIMITED DURATION CONTROL BARRIER FUNCTIONS For the system dynamics described by a stochastic differential equation: dx = h(x(t), u(t))dt + η(x(t))dw, (E.1) where h : R nx × U → R nx is the drift, η : R nx → R nx×nw is the diffusion, w is a Brownian motion of dimension n w ∈ Z + , it is often impossible to guarantee safety with probability one without making specific assumptions on the dynamics. Therefore, we instead consider an upper bound on the probability that a trajectory escape from the set of safe states within a given finite time. We give the formal definition below. Definition E.1 (Limited-duration safety for stochastic systems). Let B T,δ SLD be a closed nonempty subset of a open set of safe states C S and τ the first exit time τ := inf{t : x(t) = C S }. Then, the stopped processx(t) defined bỹ x(t) := x(t), t < τ, x(τ ), t ≥ τ, (E.2) where x(t) evolves by (E.1), is safe up to time T > 0 with probability δ, 0 ≤ δ ≤ 1, if there exists a policy φ which ensures that P x(t) = C S for some t, 0 ≤ t ≤ T : x(0) ∈ B T,δ SLD ≤ 1 − δ. To present stochastic limited duration control barrier functions (SLDCBFs) that are stochastic counterparts of LDCBFs, we define the infinitesimal generator G, for a function B SLD : X → R ≥0 , by G(B SLD )(x) := − 1 2 tr ∂ 2 B SLD (x) ∂x 2 η(x)η(x) T − ∂B SLD (x) ∂x h(x, φ(x)), x ∈ int(X ), where tr stands for the trace. Also, we make the following assumption. Then, the following theorem holds. Theorem E.1. Given T > 0 and δ, 0 ≤ δ ≤ 1, define a set of safe states C S := x ∈ X : B SLD (x) < L β , L > 0, β > 0, for a twice continuously differentiable function B SLD : X → R ≥0 . Define also the set B T,δ SLD as B T,δ SLD := x ∈ X : B SLD (x) ≤ (1 − δ) Le −βT β ⊂ C S . If B T,δ SLD is nonempty and if there exists a Lipschitz continuous policy φ : X → U satisfying φ(x) ∈ S T SLD := {u ∈ U : −G(B SLD )(x) ≤ βB SLD (x)}, for all x ∈ C S , then, under Assumption E.1, the policy φ renders the stopped processx(t) in (E.2) safe up to time T with probability δ. Proof. DefineB SLD : R ≥0 → R ≥0 as B SLD (t) := e −βt B SLD (x(t)) . Becausex(t) is an E x B SLD (x(t)) −B SLD (x(0)) = E x    t 0 −e −βt [G(B SLD ) + βB SLD ] (x(s)) ≤0 ds    for any t, 0 ≤ t < ∞, from which it follows that E x B SLD (x(t)) ≤B SLD (x(0)). Therefore,B SLD is a supermartingale with respect to the filtration {M t : t ≥ 0} generated byx(t) because B SLD (x(t)) is twice continuously differentiable and C S is bounded. We thus obtain [62, p.25] P sup 0 ≤ t ≤ T B SLD (x(t)) ≥ L β ≤ P sup 0 ≤ t ≤ TB SLD (t) ≥ Le −βT β ≤ βe βTB SLD (0) L = βe βT B SLD (x(0)) L ≤ 1 − δ. To make a claim similar to Theorem IV.2, we consider the following value function associated with the policy φ: V φ,β (x) := E x ∞ 0 e −βt (x(t))dt, β ≥ 0, where (x(t)) is the immediate cost and E x is the expectation for all trajectories (time evolutions of x(t)) starting from x = x(0). When V φ,β is twice continuously differentiable over int(X ), we obtain the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation [44]: βV φ,β (x) = −G(V φ,β )(x) + (x), x ∈ int(X ). (E.3) Given the set of safe states C S := {x ∈ X : (x) < L} , L > 0, for (x) ≥ 0, ∀x ∈ X , and the stopped process (E.2), assume that we employ a twice continuously differentiable function approximator to approximate the value function V φ,β for the stopped process, and letV φ,β denote the approximation of V φ,β . By using the HJBI equation (E.3), define the estimated immediate cost functionˆ aŝ (x) = βV φ,β (x) + G(V φ,β )(x), ∀x ∈ C S . 
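To make the bound of Theorem E.1 tangible, the following sketch runs an Euler-Maruyama simulation of a toy one-dimensional stopped process, dx = −x dt + σ dw with B_SLD(x) = x² + σ²/β (a construction chosen here so that −G(B_SLD) ≤ βB_SLD holds everywhere), and compares the empirical exit frequency with βe^{βT}B_SLD(x(0))/L. The toy system and all parameters are assumptions, not from the paper.

```python
import numpy as np

# Toy parameters (assumed, chosen so that the generator condition holds with no control).
BETA, SIGMA, L, T, DT = 0.1, 0.1, 1.0, 2.0, 1e-3
X0, N_RUNS = 0.05, 2000
rng = np.random.default_rng(0)

def b_sld(x):
    """Candidate SLDCBF for dx = -x dt + sigma dw:  B(x) = x^2 + sigma^2 / beta."""
    return x ** 2 + SIGMA ** 2 / BETA

def exits_by_T():
    """Euler-Maruyama rollout of the stopped process; True if it leaves C_S before T."""
    x = X0
    for _ in range(int(T / DT)):
        if b_sld(x) >= L / BETA:            # left the set of safe states C_S
            return True
        x += -x * DT + SIGMA * np.sqrt(DT) * rng.standard_normal()
    return False

if __name__ == "__main__":
    empirical = np.mean([exits_by_T() for _ in range(N_RUNS)])
    bound = BETA * np.exp(BETA * T) * b_sld(X0) / L
    print(f"empirical exit probability: {empirical:.4f}")
    print(f"Theorem E.1 bound         : {bound:.4f}")   # empirical should not exceed this
```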
Select $c \ge 0$ so that $\hat\ell_c(x) := \hat\ell(x) + c \ge 0$ for all $x \in C_S$, and define the function $\hat V^{\phi,\beta}_c(x) := \hat V^{\phi,\beta}(x) + c/\beta$. A claim analogous to Theorem IV.2 then holds for $\hat V^{\phi,\beta}_c$ and the stopped process (E.2). APPENDIX F ON COMPOSITIONS OF LIMITED DURATION CONTROL BARRIER FUNCTIONS The function $B_{LD1} \vee B_{LD2}$ is, however, nonsmooth in general. Therefore, even if we consider differential inclusions and the associated Carathéodory solutions as in [23,63], there might exist sliding modes that violate the inequalities imposed by $B_{LD1} \vee B_{LD2}$ at $x \in \Omega_{f+g\phi}$. Here, $\Omega_{f+g\phi}$ denotes the zero-measure set on which the dynamics is nondifferentiable (see [64] for detailed arguments, for example). Nevertheless, to obtain a smooth LDCBF for $C$, we may construct a smooth approximation of a possibly discontinuous policy $\phi$ that satisfies $\frac{dB_{LDj}(x^*)}{dt} \le \alpha\left(\frac{Le^{-\beta T}}{\beta} - B_{LDj}(x^*)\right) + \beta B_{LDj}(x^*)$ for all $x^*$ such that $B_{LD1}(x^*) < B_{LD2}(x^*)$ for $j = 1$ and $B_{LD2}(x^*) < B_{LD1}(x^*)$ for $j = 2$. Then, we can conduct value function learning to obtain an LDCBF for $C$ with an associated set of initial states that is possibly smaller than $B^T_{LD}$.
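Before any additional value-function learning, one simple smooth surrogate for the OR-composition (realized here as the pointwise minimum of the two LDCBFs, since smaller values correspond to safe states) is a shifted log-sum-exp soft-min. The sketch below is an illustrative surrogate only, not the construction used in the paper; the shift by log(2)/ρ keeps the surrogate from under-estimating the true composition, so its sublevel set does not enlarge the safe set.

```python
import numpy as np

def soft_or(b1, b2, rho=50.0):
    """Smooth surrogate for the OR-composition of two LDCBFs (pointwise minimum here,
    because smaller LDCBF values are safer).  The log-sum-exp soft-min satisfies
        min(b1, b2) - log(2)/rho <= softmin <= min(b1, b2),
    so adding log(2)/rho gives a smooth function that never falls below the true
    composition; its sublevel set {soft_or < L/beta} is contained in C1 union C2."""
    stacked = np.stack([b1, b2])
    softmin = -np.log(np.sum(np.exp(-rho * stacked), axis=0)) / rho
    return softmin + np.log(2.0) / rho

if __name__ == "__main__":
    # Two toy LDCBFs on a 1-D state (placeholders for learned value functions).
    x = np.linspace(-2.0, 2.0, 9)
    b1 = (x - 0.5) ** 2
    b2 = (x + 0.5) ** 2
    exact = np.minimum(b1, b2)
    smooth = soft_or(b1, b2)
    print("max |smooth - exact|      :", np.max(np.abs(smooth - exact)))
    print("smooth >= exact everywhere:", bool(np.all(smooth >= exact - 1e-12)))
```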
7,890
1908.09506
2969721933
When deploying autonomous agents in unstructured environments over sustained periods of time, adaptability and robustness often outweigh optimality as a primary consideration. In other words, safety and survivability constraints play a key role, and in this paper we present a novel constraint-learning framework for control tasks built on the idea of constraints-driven control. However, since control policies that keep a dynamical agent within state constraints over infinite horizons are not always available, this work instead considers constraints that can be satisfied over a sufficiently long time horizon T > 0, which we refer to as limited-duration safety. Consequently, value function learning can be used as a tool to help us find limited-duration safe policies. We show that, in some applications, the existence of limited-duration safe policies is actually sufficient for long-duration autonomy. This idea is illustrated on a swarm of simulated robots that are tasked with covering a given area, but that sporadically need to abandon this task to charge their batteries. We show how the battery-charging behavior naturally emerges as a result of the constraints. Additionally, using a cart-pole simulation environment, we show how a control policy can be efficiently transferred from the source task, balancing the pole, to the target task, moving the cart in one direction without letting the pole fall down.
In addition, transfer learning (cf. @cite_4) aims at learning a new task by utilizing the knowledge already acquired when learning other tasks, and is sometimes referred to as "lifelong learning" @cite_54 or "learning to learn" @cite_57. In reinforcement learning contexts, transfer learning first learns a set of source tasks and then uses them to speed up learning of a target task (see @cite_51 for example). When the source tasks and the target task have hierarchical structures, this is often called hierarchical reinforcement learning (e.g., @cite_19 @cite_25 @cite_35 ). Other examples include meta-learning (e.g., @cite_24 ), which considers the so-called task distribution. Our work can also be used as a transfer learning technique in which a set of good enough policies serves as useful information shared among tasks.
{ "abstract": [ "We develop a met alearning approach for learning hierarchically structured poli- cies, improving sample efficiency on unseen tasks through the use of shared primitives—policies that are executed for large numbers of timesteps. Specifi- cally, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.", "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "Learning provides a useful tool for the automatic design of autonomous robots. Recent research on learning robot control has predominantly focussed on learning single tasks that were studied in isolation. If robots encounter a multitude of control learning tasks over their entire lifetime there is an opportunity to transfer knowledge between them. In order to do so, robots may learn the invariants and the regularities of their individual tasks and environments. This task-independent knowledge can be employed to bias generalization when learning control, which reduces the need for real-world experimentation. We argue that knowledge transfer is essential if robots are to learn control with moderate learning times in complex scenarios. Two approaches to lifelong robot learning which both capture invariant knowledge about the robot and its environments are presented. Both approaches have been evaluated using a HERO-2000 mobile robot. Learning tasks included navigation in unknown indoor environments and a simple find-and-fetch task.", "Preface. Part I: Overview Articles. 1. Learning to Learn: Introduction and Overview S. Thrun, L. Pratt. 2. A Survey of Connectionist Network Reuse Through Transfer L. Pratt, B. Jennings. 3. Transfer in Cognition A. 
Robins. Part II: Prediction. 4. Theoretical Models of Learning to Learn J. Baxter. 5. Multitask Learning R. Caruana. 6. Making a Low-Dimensional Representation Suitable for Diverse Tasks N. Intrator, S. Edelman. 7. The Canonical Distortion Measure for Vector Quantization and Function Approximation J. Baxter. 8. Lifelong Learning Algorithms S. Thrun. Part III: Relatedness. 9. The Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Based on a Measure of Relatedness D.L. Silver, R.E. Mercer. 10. Clustering Learning Tasks and the Selective Cross-Task Transfer of Knowledge S. Thrun, J. O'Sullivan. Part IV: Control. 11. CHILD: A First Step Towards Continual Learning M.B. Ring. 12. Reinforcement Learning with Self-Modifying Policies J. Schmidhuber, et al 13. Creating Advice-Taking Reinforcement Learners R. Maclin, J.W. Shavlik. Contributing Authors. Index.", "This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics--as a subroutine hierarchy--and a declarative semantics--as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this nonhierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.", "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. 
The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.", "The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.", "Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting." ], "cite_N": [ "@cite_35", "@cite_4", "@cite_54", "@cite_57", "@cite_19", "@cite_24", "@cite_51", "@cite_25" ], "mid": [ "2963161674", "2165698076", "1991564165", "99485931", "2121517924", "2604763608", "2097381042", "1592847719" ] }
Constraint Learning for Control Tasks with Limited Duration Barrier Functions
Acquiring an optimal policy that attains the maximum return over some time horizon is of primary interest in the literature of both reinforcement learning [1][2][3] and optimal control [4]. A large number of algorithms have been designed to successfully control systems with complex dynamics to accomplish specific tasks, such as balancing an inverted pendulum and letting a humanoid robot run to a target location. Those algorithms may result in control strategies that are energy-efficient, take the shortest path to the goal, spend less time accomplishing the task, and sometimes outperform human beings in these senses (cf. [5]). On the other hand, as we can observe in daily life, it is often difficult to attribute optimality to human behaviors; they are hardly the most efficient for any specific task (cf. [6]). Instead, humans are capable of generalizing the behaviors acquired through completing a certain task to deal with unseen situations. This raises the question of how one should design a learning algorithm that generalizes across tasks rather than focuses on a specific one. In this paper, we hypothesize that this can be achieved by letting the agents acquire a set of good enough policies when completing one task, instead of finding a single optimal policy, and then reusing this set for another task. Specifically, we consider safety, which refers to avoiding certain states, as useful information shared among different tasks, and we regard limited-duration safe policies as good enough policies. Our work is built on the idea of constraints-driven control [7,8], a methodology for controlling agents by telling them to satisfy constraints without specifying a single optimal path. If feasibility of the assigned constraints is guaranteed, this methodology avoids recomputing an optimal path when a new task is given and instead enables high-level compositions of constraints. However, state constraints are not always feasible, and arbitrary compositions of constraints cannot be validated in general [9]. We tackle this feasibility issue by relaxing safety (i.e., forward invariance [10] of the set of safe states) to limited-duration safety, by which we mean satisfaction of safety over some finite time horizon T > 0 (see Figure I.1). For an agent starting from a certain subset of the safe region, one can always find a set of policies that render this agent safe up to some finite time. To guarantee limited-duration safety, we propose a limited duration control barrier function (LDCBF). The idea is based on local model-based control that constrains the instantaneous control input at each instant to restrict the growth of the LDCBF value by solving a computationally inexpensive quadratic programming (QP) problem.
To find an LDCBF, we make use of so-called global value function learning. More specifically, we assign a high cost to unsafe states and a lower cost to safe states, and learn the value function (or discounted infinite-horizon cost) associated with any given policy. Then, it is shown that the value function associated with any given policy is an LDCBF, i.e., a nonempty set of limited-duration safe policies can be obtained (Section IV-B). Contrary to the optimal control and Lyapunovbased approaches that only single out an optimal policy, our learning framework aims at learning a common set of policies that can be shared among different tasks. Thus, our framework can be contextualized within the so-called lifelong learning [11] and transfer learning [12] (or safe transfer; see Section V-B). The rest of this paper is organized as follows: Section II discusses the related work, including constraints-driven control and transfer learning, Section III presents notations, assumptions made in this paper, and some background knowledge. Subsequently, we present our main contributions and their applications in Section IV and Section V, respectively. In Section VI, we first validate LDCBFs on an existing control problem (see Section II). Then, our learning framework is applied to the cart-pole simulation environment in DeepMind Control Suite [13]; safety is defined as keeping the pole from falling down, and we use an LDCBF obtained after learning the balance task to facilitate learning the new task, namely, moving the cart without letting the pole fall down. III. PRELIMINARIES Throughout, R, R ≥0 and Z + are the sets of real numbers, nonnegative real numbers and positive integers, respectively. Let · R d := x, x R d be the norm induced by the inner product x, y R d := x T y for d-dimensional real vectors x, y ∈ R d , where (·) T stands for transposition. In this paper, we consider an agent with system dynamics described by an ordinary differential equation: dx dt = h(x(t), u(t)),(1) where x(t) ∈ R nx and u(t) ∈ U ⊂ R nu are the state and the instantaneous control input of dimensions n x , n u ∈ Z + , and h : R nx × U → R nx . Let X be the state space which is a compact subset of R nx . 1 In this work, we make the following assumptions. Assumption III.1. For any locally Lipschitz continuous policy φ : X → U, h is locally Lipschitz with respect to x. Assumption III.2. The control space U(⊂ R nu ) is a polyhedron. Given a policy φ : X → U and a discount factor β > 0, define the value function associated with the policy φ by V φ,β (x) := ∞ 0 e −βt (x(t))dt, β > 0, where (x(t)) is the immediate cost and x(t) is the trajectory starting from x(0) = x. When V φ,β is continuously differentiable over int(X ), namely, the interior of X , we obtain the Hamilton-Jacobi-Bellman (HJB) equation [44]: βV φ,β (x) = ∂V φ,β (x) ∂x h(x, φ(x)) + (x), x ∈ int(X ).(2) Now, if the immediate cost (x) is positive for all x ∈ X except for the equilibrium, and that a zero-cost state is globally asymptotically stable with the given policy φ : X → U, one can expect that the value function V φ,0 (x) := ∞ 0 (x(t))dt has finite values over X . In this case, the HJB equation becomesV φ,0 (x(t)) := dV φ,0 dt (x(t)) = − (x(t)), i.e., V φ,0 is decreasing over time. As such, it is straightforward to see that V φ,0 is a control Lyapunov function, i.e., there always exists a policy that satisfies the decrease condition for V φ,0 . 
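As a quick sanity check of the discounted value function and the HJB relation (2), the sketch below compares a numerically integrated V^{φ,β} with the closed form x²/(β+2) for the scalar system ẋ = −x with ℓ(x) = x², and evaluates the HJB residual. The example system is an illustration introduced here, not one used in the paper.

```python
import numpy as np

BETA = 0.5

def v_numeric(x0, dt=1e-3, t_max=40.0):
    """V^{phi,beta}(x0) = int_0^inf e^{-beta t} l(x(t)) dt for xdot = -x, l(x) = x^2,
    approximated by explicit Euler integration of the deterministic trajectory."""
    x, value = x0, 0.0
    for n in range(int(t_max / dt)):
        value += np.exp(-BETA * n * dt) * x ** 2 * dt
        x += -x * dt
    return value

def v_closed_form(x):
    """Closed form: x(t) = x e^{-t}, so V(x) = x^2 / (beta + 2)."""
    return x ** 2 / (BETA + 2.0)

def hjb_residual(x):
    """beta V - (dV/dx) h(x) - l(x); vanishes for the true value function."""
    dvdx = 2.0 * x / (BETA + 2.0)
    return BETA * v_closed_form(x) - dvdx * (-x) - x ** 2

if __name__ == "__main__":
    for x0 in [0.5, 1.0, 2.0]:
        print(f"x0={x0}: numeric {v_numeric(x0):.5f}  closed form {v_closed_form(x0):.5f}"
              f"  HJB residual {hjb_residual(x0):.2e}")
```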
However, there are two major limitations to this approach: i) one must assume that the agent stabilizes in a zero-cost state by the given policy φ, and ii) forward invariant sublevel sets of the control Lyapunov function usually become too conservative with respect to the given safe states. 2 To remedy these drawbacks, we present our major contribution in the next section. IV. CONSTRAINT LEARNING FOR CONTROL TASKS In this section, we propose limited duration control barrier functions (LDCBFs), and present their properties and a practical way to find an LDCBF. A. Limited Duration Control Barrier Functions Before formally presenting LDCBFs, we give the following definition. Definition IV.1 (Limited-duration safety). Given an open set of safe states C, let B T LD be a closed nonempty subset of C ⊂ X . The dynamical system (1) is said to be safe up to time T , if there exists a policy φ that ensures x(t) ∈ C for all 0 ≤ t < T whenever x(0) ∈ B T LD . Now, we give the definition of the LDCBFs. Definition IV.2. Let a function B LD : X → R ≥0 be continuously differentiable. Suppose that h(x, u) = f (x) + g(x) u, x ∈ X , u ∈ U, and that the set of safe states is given by C := x ∈ X : B LD (x) < L β , L > 0, β > 0.(3) Define the set B T LD = x ∈ X : B LD (x) ≤ Le −βT β ⊂ C, for some T > 0. Define also L f and L g as the Lie derivatives along f and g. Then, B LD is called a limited duration control barrier function for C and for the time horizon T if B T LD is nonempty and if there exists a monotonically increasing locally Lipschitz continuous function 3 α : R → R such that α(0) = 0 and inf u∈U {L f B LD (x) + L g B LD (x)u} ≤ α Le −βT β − B LD (x) + βB LD (x),(4) for all x ∈ C. Given an LDCBF, the admissible control space S T LD (x), x ∈ C, can be defined as S T LD (x) := {u ∈ U : L f B LD (x) + L g B LD (x)u ≤ α Le −βT β − B LD (x) + βB LD (x)}.(5) Given an LDCBF, safety up to time T is guaranteed if the initial state is taken in B T LD and an admissible control is employed as the following theorem claims. Theorem IV.1. Given a set of safe states C defined by (3) and an LDCBF B LD defined on X under Assumption III.1, any locally Lipschitz continuous policy φ : X → U that satisfies φ(x) ∈ S T LD (x), ∀x ∈ C, renders the dynamical system (1) safe up to time T whenever the initial state is in B T LD . Proof. See Appendix A. When h(x, u) = f (x) + g(x)u, x ∈ X , u ∈ U, one can constrain the control input within the admissible control space S T LD (x), x ∈ C, using a locally accurate model via QPs in the same manner as control barrier functions and control Lyapunov functions. Here, we present a general form of control syntheses via QPs. Proposition IV.1. Suppose that h(x, u) = f (x) + g(x)u, x ∈ X , u ∈ U. Given an LDCBF B LD with a locally Lipschitz derivative and the admissible control space S T LD (x * ) at x * ∈ C defined in (5), consider the QP: φ(x * ) = argmin u∈S T LD (x) u T H(x * )u + 2b(x * ) T u,(6) where H and b are Lipschitz continuous at x * ∈ C, and H(x * ) = H T (x * ) is positive definite. If the width 4 of a feasible set is strictly larger than zero, then, under Assumption III.2, the policy φ(x) defined in (6) is unique and Lipschitz continuous with respect to the state at x * . Proof. Slight modifications of [45, Theorem 1] proves the proposition. To see an advantage of considering LDCBFs, we show that an LDCBF can be obtained systematically as described next. B. 
Finding a Limited Duration Control Barrier Function We present a possible way to find an LDCBF B LD for the set of safe states through global value function learning. Let (x) ≥ 0, ∀x ∈ X , and suppose that the set of safe states is given by C := {x ∈ X : (x) < L} , L > 0.(7) Given the dynamical system defined in Definition IV.2, consider the virtual systeṁ x(t) = f (x(t)) + g(x(t))φ(x(t)) if x(t) ∈ C otherwise 0,(8) for a policy φ. Assume that we employ a continuously differentiable function approximator to approximate the value function V φ,β for the virtual system, and letV φ,β denote an approximation of V φ,β . By using the HJB equation (2), define the estimated immediate cost functionˆ aŝ (x) = βV φ,β (x) − L fV φ,β (x)−L gV φ,β (x)φ(x), ∀x ∈ C. Select c ≥ 0 so thatˆ c (x) :=ˆ (x) + c ≥ 0 for all x ∈ C, and define the functionV φ,β c (x) :=V φ,β (x) + c β . Then, the following theorem holds. Theorem IV.2. Consider the set B T LD = x ∈ X :V φ,β c (x) ≤L e −βT β , whereL := inf y ∈ X \ C βV φ,β c (y). IfB T LD is nonempty, then the dynamical system starting from the initial state inB T LD is safe up to time T when the policy φ is employed, andV φ,β c (x) is an LDCBF for the set C := x ∈ X :V φ,β c (x) <L β ⊂ C. Proof. See Appendix C. Remark IV.1. We need to considerV φ,β c instead of V φ,β because the immediate cost function and the virtual system (8) need to be sufficiently smooth to guarantee that V φ,β is continuously differentiable. In practice, the choice of c andL affects conservativeness of the set of safe states. Note, to enlarge the set B T LD , the immediate cost (x) is preferred to be close to zero for x ∈ C, and L needs to be sufficiently large. Example IV.1. As an example of finding an LDCBF, we use a deep neural network. Suppose the discrete-time transition is given by (x n , u n , (x n ), x n+1 ), where n ∈ 0 ∪ Z + is the time instant. Then, by executing a given policy φ, we store the negative data, where x n+1 / ∈ C, and the positive data, where x n+1 ∈ C, separately, and conduct prioritized experience replay [46,47]. Specifically, initialize a target networkV φ,β Target and a local networkV φ,β Local , and update the local network by sampling a random minibatch of N negative and positive transitions {(x ni , u ni , (x ni ), x ni+1 )} i∈{1,2,...,N } to minimize 1 N N i=1 (y ni −V φ,β Local (x ni )), where y ni = (x ni ) + γV φ,β Target (x ni+1 ) x ni+1 ∈ C, L 1−γ x ni+1 / ∈ C. Here, γ ≈ − log (β)/∆ t is a discount factor for a discretetime case, where ∆ t is the time interval of one time step. The target network is soft-updated using the local network byV φ,β Target ← µV φ,β Local + (1 − µ)V φ,β Target for µ 1. One can transform the learned local network to a continuous-time form via multiplying it by ∆ t . Although we cannot ensure forward invariance of the set C using LDCBFs, the proposed approach is still set theoretic. As such, we can consider the compositions of LDCBFs. C. Compositions of Limited Duration Barrier Functions The Boolean compositional CBFs were studied in [23,48]. In [23], max and min operators were used for the Boolean operations, and nonsmooth barrier functions were proposed out of necessity. However, it is known that, even if two sets C 1 ⊂ X and C 2 ⊂ X are controlled invariant [49, page 21] for the dynamical system (1), the set C 1 ∩ C 2 is not necessarily controlled invariant, while C 1 ∪C 2 is indeed controlled invariant [49, Proposition 4.13]. We can make a similar assertion for limited-duration safety as follows. Proposition IV.2. 
Assume that there exists a limited-duration safe policy for each set of safe states C j ⊂ X , j ∈ {1, 2, . . . , J}, J ∈ Z + , that renders an agent with the dynamical system (1) safe up to time T whenever starting from inside a closed nonempty set B LDj ⊂ C j . Then, given the set of safe states C := J j=1 C j , there also exists a policy rendering the dynamical system safe up to time T whenever starting from any state in B LD := J j=1 B LDj . Proof. A limited-duration safe policy for C j also keeps the agent inside the set C up to time T when starting from inside B LDj . If there exist LDCBFs for C j s, it is natural to ask if there exists an LDCBF for C. Because of the nonsmoothness stemming from Boolean compositions, however, obtaining an LDCBF for C requires an additional learning in general (see Appendix F). Also, existence of an LDCBF for the intersection of multiple sets of safe states is not guaranteed, and we need an additional learning as well. So far, we have seen a possible way to obtain an LDCBF for a given set of safe states expressed as in (7). As our approach is set-theoretic rather than specifying a single optimal policy, it is also compatible with the constraints-driven control and transfer learning as described in the next section. V. APPLICATIONS In this section, we present two practical examples that illustrate benefits of considering LDCBFs, namely, long-duration autonomy and transfer learning. A. Applications to Long-duration Autonomy In many applications, guaranteeing particular properties (e.g., forward invariance) over an infinite-time horizon is difficult or some forms of approximations are required. Specifically, when a function approximator is employed, there will certainly be an approximation error. Nevertheless, it is often sufficient to guarantee safety up to certain finite time, and our proposed LDCBFs act as useful relaxations of CBFs. To see that one can still achieve long-duration autonomy by using LDCBFs, we consider the settings of work in [27]. Suppose that the state x := [E, p T ] T ∈ R 3 has the information of energy level E ∈ R ≥0 and the position p ∈ R 2 of an agent. Suppose also that E max > 0 is the maximum energy level and ρ(p) ≥ 0 (equality holds only when the agent is at a charging station) is the energy required to bring the agent to a charging station from the position p ∈ R 2 . Then, although we emphasize that we can obtain an LDCBF by value function learning if necessary, let us assume that the function B LD (x) := E max − E + ρ(p) ≥ 0 is an LDCBF, for simplicity. Then, by letting L = β(E max − E min ) for some β > 0 and for the minimum necessary energy level E min , 0 ≤ E min < E max , the set of safe states can be given by C := x ∈ X : B LD (x) < L β . Now, under these settings, the following proposition holds. Proposition V.1. Assume that the energy dynamics is lower bounded by dE dt ≥ −K d , ∃K d > 0, which implies that the least exit timeT energy (E) of E being below E min iŝ T energy (E) = (E − E min ) K d . Also, suppose we employ a locally Lipschitz continuous policy φ that satisfies the LDCBF condition using a battery dynamics model dÊ dt = −K d . Then, by taking T >T energy (E 0 ) for the initial energy level E 0 > E min and letting B T LD := x ∈ X : B LD (x) ≤ Le −βT β ⊂ C, the agent starting from a state in B T LD will reach the charging station before the energy reaches to E min . Proof. See Appendix D. 
Hence, LDCBFs are shown to be applicable to some cases where it is difficult to guarantee that certain properties hold over infinite horizons, but where limited-duration safety suffices. One of the benefits of using LDCBFs is that, once a set of limited-duration safe policies or good enough policies is obtained, one can reuse them for different tasks. Therefore, given that we can obtain an LDCBF through global value function learning for a policy that is not necessarily stabilizing the system, it is natural to ask if one can employ LDCBFs to transfer knowledge. Indeed, LDCBFs also have good compatibility with transfer learning (or safe transfer) as discussed below. B. Applications to Transfer Learning Given the immediate cost function, reinforcement learning aims at finding an optimal policy. Clearly, the obtained policy is not optimal with respect to a different immediate cost function in general. Therefore, employing the obtained policy straightforwardly in different tasks makes no sense. However, it is quite likely that the obtained policy is good enough even in different tasks because some mandatory constraints such as avoiding unsafe regions of the state space are usually shared among different tasks. Therefore we wish to exploit constraint learning for the sake of transfer learning, i.e., we learn constraints which are common to the target task while learning source tasks. Definition V.1 (Transfer learning, [12, modified version of Definition 1]). Given a set of training data D S for one task (i.e., source task) denoted by T S (e.g., an MDP) and a set of training data D T for another task (i.e., target task) denoted by T T , transfer learning aims to improve the learning of the target predictive function f T (i.e., a policy in our example) in D T using the knowledge in D S and T S , where D S = D T , or T S = T T . For example, when learning an optimal policy for the balance task of the cart-pole problem, one can simultaneously learn a set of limited-duration safe policies that keep the pole from falling down up to certain time T > 0. The set of these limited-duration safe policies is obviously useful for other tasks such as moving the cart to one direction without letting the pole fall down. Here, we present a possible application of limited duration control barrier functions to transfer learning. We take the following steps: 1) Design J ∈ Z + cost functions j , j ∈ {1, 2, . . . , J}, each of which represents a constraint by defining a set of safe state C j := {x ∈ X : j (x) < L} , L > 0, j ∈ {1, 2, . . . , J}. 2) Conduct reinforcement learning for a cost function by using any of the reinforcement learning techniques. 3) No matter the currently obtained policy φ is optimal or not, one can obtain an LDCBF B LDj for each set C j , j ∈ {1, 2, . . . , J}. More specifically, B LDj is given by B LDj (x) := V φ,β j (x) := ∞ 0 e −βt j (x(t))dt, j ∈ {1, 2, . . . , J}, for x(0) = x. 4) When learning a new task, policies are constrained by LDCBFs depending on which constraints are common to the new task. We study some practical implementations. Given LDCBFs B LDj , j ∈ {1, 2, ..., J}, define the set Φ T j of admissible policies as Φ T j := {φ :φ(x) ∈ S T LDj (x), ∀x ∈ C j } ⊂ Φ := {φ : φ(x) ∈ U, ∀x ∈ X }, where S T LDj (x) is the set of admissible control inputs at x for the jth constraint. If an optimal policy φ T T for the target task T T is included in Φ T j , one can conduct learning for the target task within the policy space Φ T j . 
If not, one can still consider Φ T j as a soft constraint and can explore the policy space Φ \ Φ T j with a given probability or can just select the initial policy from Φ T j . In practice, a parametrized policy is usually considered; a policy φ θ expressed by a parameter θ is updated via policy gradient methods [50]. If the policy is in the linear form with a fixed feature vector, the projected policy gradient method [51] can be used. Thanks to the fact that an LDCBF defines an affine constraint on instantaneous control inputs if the system dynamics is affine in control, the projected policy gradient method looks like θ ← Γ j [θ + λ∇ θ F T T (θ)]. Here, Γ j : Φ → Φ T j projects a policy onto the affine constraint defined by the jth constraint and F T T (θ) is the objective function for the target task which is to be maximized. For the policy not in the linear form, one may update policies based on LDCBFs by modifying the deep deterministic policy gradient (DDPG) method [52]: because through LDCBFs, the global property (i.e., limited-duration safety) is ensured by constraining local control inputs, it suffices to add penalty terms to the cost when updating a policy using samples. For example, one may employ the log-barrier extension proposed in [53], which is a smooth approximation of the hard indicator function for inequality constraints but is not restricted to feasible points. VI. EXPERIMENTS In this section, we validate our learning framework. First, we show that LDCBFs indeed work for the constraints-driven control problem considered in Section V-A by simulation. Then, we apply LDCBFs to a transfer learning problem for the cart-pole simulation environment. A. Constraints-driven coverage control of multi-agent systems Let the parameters be E max = 1.0, E min = 0.55, K d = 0.01, β = 0.005 and T = 50.0 > 45.0 = (E max − E min )/K d . We consider six agents (robots) with single integrator dynamics. An agent of the position p i := [x i , y i ] T is assigned a charging station of the position p charge,i , where x and y are the X position and the Y position, respectively. When the agent is close to the station (i.e., p i − p charge,i R 2 ≤ 0.05), it remains there until the battery is charged to E ch = 0.92. Actual battery dynamics is given by dE/dt = −0.01E. The coverage control task is encoded as Lloyd's algorithm [54] aiming at converging to the Centroidal Voronoi Tesselation, but with a soft margin so that the agent prioritizes the safety constraint. The locational cost used for the coverage control task is given as follows [55]: MATLAB simulation (the simulator is provided on the Robotarium [56] website: www.robotarium.org), we used the random seed rng(5) for determining the initial states. Note, for every agent, the energy level and the position are set so that it starts from inside the set B T LD . Figure VI.1 shows (a) the images of six agents executing coverage tasks and (b) images of the agents three of which are charging their batteries. Figure VI-B shows the simulated battery voltage data of the six agents, from which we can observe that LDCBFs worked effectively for the swarm of agents to avoid depleting their batteries. 6 i=1 Vi(p) p i −p 2 ϕ(p)dp, where V i (p) = {p ∈ X : p i −p ≤ p j −p , ∀j = i} is the Voronoi cell for the agent i. In particular, we used ϕ([x,ŷ] T ) = e −{(x−0.2) 2 +(ŷ−0.3) 2 }/0.06 + 0.5e −{(x+0.2) 2 +(ŷ+0.1) 2 }/0.03 . In B. Transfer from Balance to Move: Cart-pole problem Next, we apply LDCBFs to transfer learning. 
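Returning to the policy-update scheme described above (and used again in the cart-pole experiment below), the following PyTorch sketch shows one way to add a log-barrier-extension penalty for the affine LDCBF constraint to a DDPG-style actor loss. The extension implemented here is the standard construction (a log barrier continued by its tangent line past z = −1/t²), which follows the spirit of [53] but may differ from that reference in constants; the actor, critic, and stored constraint tensors (`lfB`, `lgB`, `rhs`) are placeholders.

```python
import torch

def log_barrier_ext(z, t=10.0):
    """Smooth one-sided penalty for constraints z <= 0: the usual -(1/t) log(-z) barrier,
    continued linearly for z > -1/t^2 so that infeasible points get finite penalties."""
    threshold = -1.0 / t ** 2
    barrier = -torch.log(-torch.clamp(z, max=threshold)) / t
    linear = t * z - torch.log(torch.tensor(1.0 / t ** 2)) / t + 1.0 / t
    return torch.where(z <= threshold, barrier, linear)

def actor_loss(actor, critic, batch, lam=1.0):
    """DDPG-style actor loss with an LDCBF penalty.  `batch` is assumed to carry the
    stored constraint data so that the LDCBF condition reads lfB + lgB . u <= rhs."""
    s, lfB, lgB, rhs = batch["state"], batch["lfB"], batch["lgB"], batch["rhs"]
    u = actor(s)                                    # (N, n_u) actions
    z = lfB + (lgB * u).sum(dim=-1) - rhs           # constraint residual, want z <= 0
    return (-critic(s, u) + lam * log_barrier_ext(z)).mean()

if __name__ == "__main__":
    # Toy shapes only, to show how the pieces fit together (assumed dimensions).
    n, n_x, n_u = 64, 4, 1
    actor = torch.nn.Sequential(torch.nn.Linear(n_x, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, n_u), torch.nn.Tanh())
    critic_net = torch.nn.Sequential(torch.nn.Linear(n_x + n_u, 32), torch.nn.ReLU(),
                                     torch.nn.Linear(32, 1))
    critic = lambda s, a: critic_net(torch.cat([s, a], dim=-1)).squeeze(-1)
    batch = {"state": torch.randn(n, n_x), "lfB": torch.randn(n),
             "lgB": torch.randn(n, n_u), "rhs": torch.rand(n) + 0.5}
    loss = actor_loss(actor, critic, batch)
    loss.backward()
    print("actor loss:", float(loss))
```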
The simulation environment and the deep learning framework used in this experiment are "Cart-pole" in DeepMind Control Suite and PyTorch [57], respectively. We take the following steps: 1) Learn a policy that successfully balances the pole by using DDPG [52]. 2) Learn an LDCBF by using the obtained actor network. 3) Try a random policy with the learned LDCBF and a (locally) accurate model to see that LDCBF works reasonably. 4) With and without the learned LDCBF, learn a policy that moves the cart to left without letting the pole fall down, which we refer to as move-the-pole task. The parameters used for this experiment are summarized in Table VI.1. Here, angle threshold stands for the threshold of cos ψ where ψ is the angle of the pole from the standing position, and position threshold is the threshold of the cart position p. The angle threshold and the position threshold are used to terminate an episode. Note that the cart-pole environment of MuJoCo [58] xml data in DeepMind Control Suite is modified so that the cart can move between −3.8 and 3.8. As presented in Example IV.1, we use prioritized experience replay when learning an LDCBF. Specifically, we store the positive and the negative data, and sample 4 data points from the positive one and the remaining 60 data points from the negative one. In this experiment, actor, critic and LDCBF networks use ReLU nonlinearities. The actor network and the LDCBF network consist of two layers of 300 → 200 units, and the critic network is of two layers of 400 → 300 units. The control input vector is concatenated to the state vector from the second critic layer. Step1: The average duration (i.e., the first exit time, namely, the time when the pole first falls down) out of 10 seconds (corresponding to 1000 time steps) over 10 trials for the policy learned through the balance task by DDPG was 10 seconds. Step2: Then, by using this successfully learned policy, an LDCBF is learned by assigning the cost (x) = 1.0 for cos ψ < 0.2 and (x) = 0.1 elsewhere. Also, because the LDCBF is learned in a discrete-time form, we transform it to a continuous-time form via multiplying it by ∆ t = 0.01. When learning an LDCBF, we initialize each episode as follows: the angle ψ is uniformly sampled within −1.5 ≤ ψ ≤ 1.5, the cart velocityṗ is multiplied by 100 and the angular velocityψ is multiplied by 200 after being initialized by DeepMind Control Suite. The LDCBF learned by using this policy is illustrated in Figure VI Step3: To test this LDCBF, we use a uniformly random policy (φ(x) takes the value between −1 and 1) constrained by the LDCBF with the function α(q) = max {0.1q, 0} and with the time constant T = 5.0. When imposing constraints, we use the (locally accurate) control-affine model of the cart-pole in the work [59], where we replace the friction parameters by zeros for simplicity. The average duration out of 10 seconds over 10 trials for this random policy was 10 seconds, which indicates that the LDCBF worked sufficiently well. We also tried this LDCBF with the function α(q) = max {3.0q, 0} and T = 5.0, which resulted in the average duration of 5.58 seconds. Moreover, we tried the fixed policy φ(x) = 1.0, ∀x ∈ X , with the function α(q) = max {0.1q, 0} and T = 5.0, and the average duration was 4.73 seconds, which was sufficiently close to T = 5.0. Step4: For the move-the-pole task, we define the success by the situation where the cart position p, −3.8 ≤ p ≤ 3.8, ends up in the region of p ≤ −1.8 without letting the pole fall down. 
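A minimal PyTorch sketch of the LDCBF learning described above (and in Example IV.1): transitions are kept in separate positive (next state in C) and negative (next state outside C) buffers, each minibatch mixes 4 positive and 60 negative samples, the target is ℓ(x) + γV_target(x') for safe next states and L/(1−γ) otherwise, and the target network is soft-updated with rate µ. The synthetic transitions in the usage example are stand-ins for rollouts of the balance policy.

```python
import random
import torch

GAMMA, L, MU = 0.99, 1.0, 0.005          # discount, cost bound, soft-update rate (assumed)
N_POS, N_NEG = 4, 60                      # minibatch split used in the experiment

def make_net(n_x):
    """Two hidden layers of 300 -> 200 units, as for the LDCBF network in the text."""
    return torch.nn.Sequential(torch.nn.Linear(n_x, 300), torch.nn.ReLU(),
                               torch.nn.Linear(300, 200), torch.nn.ReLU(),
                               torch.nn.Linear(200, 1))

def td_targets(batch, v_target):
    """y = l(x) + gamma * V_target(x') if x' in C, else L / (1 - gamma)."""
    x_next = torch.stack([b["x_next"] for b in batch])
    cost = torch.tensor([b["cost"] for b in batch])
    safe = torch.tensor([b["next_in_C"] for b in batch], dtype=torch.bool)
    with torch.no_grad():
        boot = v_target(x_next).squeeze(-1)
    return torch.where(safe, cost + GAMMA * boot, torch.full_like(boot, L / (1.0 - GAMMA)))

def update(v_local, v_target, opt, pos_buffer, neg_buffer):
    """One LDCBF update: sample from both buffers, regress to the TD targets,
    then soft-update the target network."""
    batch = random.sample(pos_buffer, N_POS) + random.sample(neg_buffer, N_NEG)
    x = torch.stack([b["x"] for b in batch])
    loss = torch.nn.functional.mse_loss(v_local(x).squeeze(-1), td_targets(batch, v_target))
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        for p_t, p_l in zip(v_target.parameters(), v_local.parameters()):
            p_t.mul_(1.0 - MU).add_(MU * p_l)
    return float(loss)

if __name__ == "__main__":
    n_x = 5
    def fake(in_C):   # synthetic transition (assumption, for demonstration only)
        return {"x": torch.randn(n_x), "x_next": torch.randn(n_x),
                "cost": random.choice([0.1, 1.0]), "next_in_C": in_C}
    pos, neg = [fake(True) for _ in range(500)], [fake(False) for _ in range(500)]
    v_local, v_target = make_net(n_x), make_net(n_x)
    v_target.load_state_dict(v_local.state_dict())
    opt = torch.optim.Adam(v_local.parameters(), lr=1e-3)
    for step in range(3):
        print("loss:", update(v_local, v_target, opt, pos, neg))
```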
The angle ψ is uniformly sampled within −0.5 ≤ ψ ≤ 0.5 and the rest follow the initialization of DeepMind Control Suite. The reward is given by (1 + cos ψ)/2×(utils.rewards.tolerance(ṗ + 1.0, bounds = (−2.0, 0.0), margin = 0.5)), where utils.rewards.tolerance is the function defined in [13]. In other words, to move the pole to left, we give high rewards when the cart velocity is negative and the pole is standing up. To use the learned LDCBF for DDPG, we store matrices and vectors used in linear constraints along with other variables such as control inputs and states, which are then used for experience replay. Then, the logbarrier extension cost proposed in [53] is added when updating policies. Also, we try DDPG without using LDCBF for the move-the-pole task. Both approaches initialize the policy by the one obtained after the balance task. The average success rates of the policies obtained after the numbers of episodes up to 15 over 10 trials are given in Table VI.2 for DDPG with the learned LDCBF and DDPG without LDCBF. This result implies that our proposed approach successfully transferred information from the source task to the target task by sharing a common safety constraint. VII. CONCLUSION In this paper, we presented a notion of limited-duration safety as a relaxation of forward-invariance of a set of safe states. Then, we proposed limited-duration control barrier functions that are used to guarantee limited-duration safety by using locally accurate model of agent dynamics. We showed that LDCBFs can be obtained through global value function learning, and analyzed some of their properties. LDCBFs were validated through persistent coverage control tasks and were successfully applied to a transfer learning problem via sharing a common state constraint. APPENDIX A PROOF OF THEOREM IV.1 Under Assumption III.1, the trajectories x(t) with an initial condition x(0) ∈ B T LD exist and are unique over t ≥ 0. Let the first time at which the trajectory x(t) exits C be T e > 0 and let T p , 0 ≤ T p < T e , denote the last time at which the trajectory x(t) passes through the boundary of B T LD from inside before first exiting C. Since α is locally Lipschitz continuous and B LD is continuously differentiable, the right hand side of (4) is locally Lipschitz continuous. Thus solutions to the differential equatioṅ r(t) = α Le −βT β − r(t) + βr(t), where the initial condition is given by r(T p ) = B LD (x(T p )), exist and are unique for all t, T p ≤ t ≤ T e . On the other hand, the solution tȯ s(t) = βs(t), where the initial condition is given by s( T p ) = B LD (x(T p )) = Le −βT β , is s(t) = B LD (x(T p ))e β(t−Tp) , ∀t ≥ T p . It thus follows that s(T p + T ) = L β e −βT e βT = L β , and T p +T is the first time at which the trajectory s(t), t ≥ T p , exits C. Because α( Le −βT β − r(t)) ≤ 0, T p ≤ t ≤ T e , we obtain, by the Comparison Lemma [10], [60,Theorem 1.10.2], B LD (x(t)) ≤ r(t) ≤ s(t) for all t, T p ≤ t ≤ T e . If we assume T e < T p +T , it contradicts the fact that B LD (x(T e )) ≤ s(T e ) < s(T p + T ) = L β , and hence T e ≥ T p + T . Therefore, we conclude that any Lipschitz continuous policy φ : X → U such that φ(x) ∈ S T LD (x), ∀x ∈ C, renders the dynamical system safe up to time T p + T (≥ T ) whenever the initial state x(0) is in B T LD . APPENDIX B ON PROPOSITION IV.1 The width of a feasible set is defined as the unique solution to the following linear program: u ω (x) = max [u T ,ω] T ∈R nu+1 ω (B.1) s.t. 
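The policy update with the learned LDCBF adds the log-barrier extension of [53] as a penalty on the stored linear constraints. Below is a hedged sketch of one common form of that extension: it coincides with the standard log barrier on the strictly feasible side and is extended linearly beyond a threshold, so it stays finite and differentiable even at infeasible points. The constants and the way it is weighted against the DDPG actor loss are illustrative assumptions, not the authors' exact settings.

```python
import torch

def log_barrier_extension(z, t=5.0):
    """Smooth extension of the log barrier -(1/t) log(-z) for the constraint z <= 0 (z a tensor).

    For z <= -1/t^2 it equals the standard log barrier; beyond that it is extended
    linearly (matching value and slope at z = -1/t^2), so gradients exist everywhere.
    """
    threshold = -1.0 / (t ** 2)
    barrier = -(1.0 / t) * torch.log(-z.clamp(max=threshold))   # safe: -z >= 1/t^2 here
    linear = t * z - (1.0 / t) * torch.log(torch.tensor(1.0 / (t ** 2))) + 1.0 / t
    return torch.where(z <= threshold, barrier, linear)

# Usage inside a DDPG actor update (sketch): given stored constraint rows (a, b)
# from the learned LDCBF, penalize violation of a . pi(s) - b <= 0 on replayed states.
# actor_loss = -critic(s, actor(s)).mean() \
#              + lam * log_barrier_extension((a * actor(s)).sum(-1) - b).mean()
```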
L f B LD (x) + L g B LD (x)u + ω ≤ α Le −βT β − B LD (x) + βB LD (x) u + [ω, ω . . . , ω] T ∈ U APPENDIX C PROOF OF THEOREM IV.2 Because, by definition, V φ,β c (x) ≥L β , ∀x ∈ X \ C, it follows that C = x ∈ X :V φ,β c (x) <L β ⊂ C. Because the continuously differentiable functionV φ,β c (x) sat- isfiesV φ,β c (x) = L fV φ,β c (x) + L gV φ,β c (x)φ(x) = βV φ,β c (x) −ˆ c (x), ∀x ∈ C, andˆ c (x) ≥ 0, ∀x ∈ C, there exists at least one policy φ that satisfies φ(x) ∈ S T LD (x) = {u ∈ U : L fV φ,β c (x) + L gV φ,β c (x)u ≤ α L e −βT β −V φ,β c (x) + βV φ,β c (x)}, for all x ∈ C and for a monotonically increasing locally Lipschitz continuous function α such that α(q) = 0, ∀q ≤ 0. Therefore,V φ,β c is an LDCBF for the setĈ. Remark C.1. A sufficiently large constant c could be chosen in practice. Ifˆ c (x) > 0 for all x ∈ C and the value function is learned by using a policy φ such that φ(x)+[c φ , c φ . . . , c φ ] T ∈ U for some c φ > 0, then the unique solution to the linear program (B.1) satisfies u ω (x) > 0 for any x ∈ C. APPENDIX D PROOF OF PROPOSITION V.1 Under Assumption III.1, the trajectories x(t) with an initial condition x(0) ∈ B T LD satisfying the given LDCBF condition (that uses the battery dynamics model dÊ/dt = −K d ) exist and are unique over t ≥ 0. Let the actual exit time of E being below E min be T energy > 0, and let the first time at which the trajectory x(t) exits C be T e > 0 (T e = ∞ if the agent never exits C). Note it holds that T e ≤ T energy . Also, let E t and ρ t be the trajectories of E and ρ(p). Definê T e := min T e , inf {t : ρ t = 0 ∧ x(t) / ∈ B T LD } , and T p := max t : x(t) ∈ ∂B T LD ∧ t ≤T e , where ∂B T LD denotes the boundary of B T LD . Now, if E t = E min and ρ t > 0 for some t < T p , it contradicts the fact that B LD (x(t)) = E max − E t + ρ t < E max − E min (∵ x(t) ∈ C, ∀t < T p ) . Therefore, it follows that E t > E min or ρ t = 0 for all t < T p . This implies that we should only consider the time t ≥ T p by assuming that ρ Tp ≥ 0. LetÊ t be the trajectory following the virtual battery dynamics dÊ/dt = −K d andÊ Tp = E Tp , and let s(t) be the unique solution tȯ s(t) = βs(t), t ≥ T p , where s(T p ) = B LD (x(T p )) = (E max − E min )e −βT . Also, let (t) = s(t) +Ê t − E max , t ≥ T p . Then, the time at which s(t) becomes E max − E min is T p + T because s(T + T p ) = B LD (x(T p ))e β(T +Tp−Tp) = (E max − E min )e −βT e β(T +Tp−Tp) = E max − E min . Sincê T energy (E Tp ) ≤T energy (E 0 ) < T and (t) = B LD (x(T p ))e β(t−Tp) +Ê Tp − K d (t − T p ) − E max , we obtain T 0 := inf {t : (t) = 0 ∧ t ≥ T p } ≤ T p +T energy (E Tp ). On the other hand, the actual battery dynamics can be written as dE/dt = −K d +∆(x), where ∆(x) ≥ 0. Therefore, we see that the trajectory x(t) satisfies dB LD (x) dt ≤ βB LD (x) − ∆(x), ∀t, T p ≤ t ≤T e . Then, because d (B LD (x(t)) − s(t)) dt ≤ β (B LD (x(t)) − s(t)) − ∆(x(t)) ≤ β (B LD (x(t)) − s(t)) , ∀t, T p ≤ t ≤T e , and β (B LD (x(T p )) − s(T p )) = 0, we obtain B LD (x(t)) − s(t) ≤ − t 0 ∆(x(t))dt, ∀t, T p ≤ t ≤T e . Also, it is straightforward to see that T energy ≥ T e ≥ T p + T ≥ T p +T energy (E Tp ). From the definitions of B LD and (t), it follows that ρ t − (t) = B LD (x(T p )) − s(t) + E t −Ê t ≤ − t 0 ∆(x(t))dt + t 0 ∆(x(t))dt = 0, ∀t, T p ≤ t ≤T e , which leads to the inequality ρ t ≤ (t) for all t, T p ≤ t ≤T e . Hence, we conclude that T := inf {t : ρ t = 0 ∧ t ≥ T p } ≤T 0 ≤ T p +T energy (E Tp ) ≤ T energy , andT e =T , which proves the proposition. 
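The width u_ω(x) in (B.1) is a small linear program in (u, ω). A minimal sketch with scipy.optimize.linprog is given below, assuming a box input set U = [u_min, u_max]^{n_u} (the text does not fix the shape of U) and the same α as in the experiments; names and defaults are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

def feasible_set_width(LfB, LgB, B_x, L, beta, T, u_min, u_max,
                       alpha=lambda q: max(0.1 * q, 0.0)):
    """Solve the linear program (B.1): maximize omega over (u, omega) subject to
         LfB + LgB . u + omega <= alpha(L e^{-beta T}/beta - B(x)) + beta B(x)
         u + omega * 1 in U,  with U = [u_min, u_max]^n assumed here (box).
    A positive return value certifies strict feasibility of the LDCBF row at x."""
    LgB = np.asarray(LgB, dtype=float)                # row vector, since B is scalar-valued
    n = LgB.size
    rhs = alpha(L * np.exp(-beta * T) / beta - B_x) + beta * B_x
    c = np.zeros(n + 1); c[-1] = -1.0                 # linprog minimizes, so minimize -omega
    A_ub, b_ub = [np.append(LgB, 1.0)], [rhs - LfB]   # the LDCBF row
    for i in range(n):
        up = np.zeros(n + 1); up[i], up[-1] = 1.0, 1.0      # u_i + omega <= u_max
        lo = np.zeros(n + 1); lo[i], lo[-1] = -1.0, -1.0    # -(u_i + omega) <= -u_min
        A_ub += [up, lo]; b_ub += [u_max, -u_min]
    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (n + 1))
    return res.x[-1] if res.success else -np.inf
```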
APPENDIX E STOCHASTIC LIMITED DURATION CONTROL BARRIER FUNCTIONS For the system dynamics described by a stochastic differential equation: dx = h(x(t), u(t))dt + η(x(t))dw, (E.1) where h : R nx × U → R nx is the drift, η : R nx → R nx×nw is the diffusion, w is a Brownian motion of dimension n w ∈ Z + , it is often impossible to guarantee safety with probability one without making specific assumptions on the dynamics. Therefore, we instead consider an upper bound on the probability that a trajectory escape from the set of safe states within a given finite time. We give the formal definition below. Definition E.1 (Limited-duration safety for stochastic systems). Let B T,δ SLD be a closed nonempty subset of a open set of safe states C S and τ the first exit time τ := inf{t : x(t) = C S }. Then, the stopped processx(t) defined bỹ x(t) := x(t), t < τ, x(τ ), t ≥ τ, (E.2) where x(t) evolves by (E.1), is safe up to time T > 0 with probability δ, 0 ≤ δ ≤ 1, if there exists a policy φ which ensures that P x(t) = C S for some t, 0 ≤ t ≤ T : x(0) ∈ B T,δ SLD ≤ 1 − δ. To present stochastic limited duration control barrier functions (SLDCBFs) that are stochastic counterparts of LDCBFs, we define the infinitesimal generator G, for a function B SLD : X → R ≥0 , by G(B SLD )(x) := − 1 2 tr ∂ 2 B SLD (x) ∂x 2 η(x)η(x) T − ∂B SLD (x) ∂x h(x, φ(x)), x ∈ int(X ), where tr stands for the trace. Also, we make the following assumption. Then, the following theorem holds. Theorem E.1. Given T > 0 and δ, 0 ≤ δ ≤ 1, define a set of safe states C S := x ∈ X : B SLD (x) < L β , L > 0, β > 0, for a twice continuously differentiable function B SLD : X → R ≥0 . Define also the set B T,δ SLD as B T,δ SLD := x ∈ X : B SLD (x) ≤ (1 − δ) Le −βT β ⊂ C S . If B T,δ SLD is nonempty and if there exists a Lipschitz continuous policy φ : X → U satisfying φ(x) ∈ S T SLD := {u ∈ U : −G(B SLD )(x) ≤ βB SLD (x)}, for all x ∈ C S , then, under Assumption E.1, the policy φ renders the stopped processx(t) in (E.2) safe up to time T with probability δ. Proof. DefineB SLD : R ≥0 → R ≥0 as B SLD (t) := e −βt B SLD (x(t)) . Becausex(t) is an E x B SLD (x(t)) −B SLD (x(0)) = E x    t 0 −e −βt [G(B SLD ) + βB SLD ] (x(s)) ≤0 ds    for any t, 0 ≤ t < ∞, from which it follows that E x B SLD (x(t)) ≤B SLD (x(0)). Therefore,B SLD is a supermartingale with respect to the filtration {M t : t ≥ 0} generated byx(t) because B SLD (x(t)) is twice continuously differentiable and C S is bounded. We thus obtain [62, p.25] P sup 0 ≤ t ≤ T B SLD (x(t)) ≥ L β ≤ P sup 0 ≤ t ≤ TB SLD (t) ≥ Le −βT β ≤ βe βTB SLD (0) L = βe βT B SLD (x(0)) L ≤ 1 − δ. To make a claim similar to Theorem IV.2, we consider the following value function associated with the policy φ: V φ,β (x) := E x ∞ 0 e −βt (x(t))dt, β ≥ 0, where (x(t)) is the immediate cost and E x is the expectation for all trajectories (time evolutions of x(t)) starting from x = x(0). When V φ,β is twice continuously differentiable over int(X ), we obtain the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation [44]: βV φ,β (x) = −G(V φ,β )(x) + (x), x ∈ int(X ). (E.3) Given the set of safe states C S := {x ∈ X : (x) < L} , L > 0, for (x) ≥ 0, ∀x ∈ X , and the stopped process (E.2), assume that we employ a twice continuously differentiable function approximator to approximate the value function V φ,β for the stopped process, and letV φ,β denote the approximation of V φ,β . By using the HJBI equation (E.3), define the estimated immediate cost functionˆ aŝ (x) = βV φ,β (x) + G(V φ,β )(x), ∀x ∈ C S . 
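For readability, the chain of inequalities at the end of the proof of Theorem E.1 can be restated (same notation as the text) as

```latex
\mathbb{P}\!\left[\,\sup_{0 \le t \le T} B_{\mathrm{SLD}}(\tilde{x}(t)) \ge \tfrac{L}{\beta}\right]
\;\le\;
\mathbb{P}\!\left[\,\sup_{0 \le t \le T} \tilde{B}_{\mathrm{SLD}}(t) \ge \tfrac{L e^{-\beta T}}{\beta}\right]
\;\le\;
\frac{\beta e^{\beta T}}{L}\,\tilde{B}_{\mathrm{SLD}}(0)
\;=\;
\frac{\beta e^{\beta T}}{L}\,B_{\mathrm{SLD}}(x(0))
\;\le\; 1 - \delta,
```

where the first inequality uses $\tilde{B}_{\mathrm{SLD}}(t) = e^{-\beta t} B_{\mathrm{SLD}}(\tilde{x}(t)) \ge e^{-\beta T} B_{\mathrm{SLD}}(\tilde{x}(t))$ for $t \le T$, the second is the maximal inequality for nonnegative supermartingales, and the last uses $x(0) \in \mathcal{B}^{T,\delta}_{\mathrm{SLD}}$, i.e., $B_{\mathrm{SLD}}(x(0)) \le (1-\delta)\,L e^{-\beta T}/\beta$.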
Select c ≥ 0 so thatˆ c (x) :=ˆ (x) + c ≥ 0 for all x ∈ C S , and define the functionV φ,β c (x) :=V φ,β (x) + c β . Now, the following theorem holds. The function B LD1 ∨B LD2 is, however, nonsmooth in general. Therefore, even if we consider differential inclusion and associated Carathéodory solutions as in [23,63], there might exist sliding modes that violate inequalities imposed by a function B LD1 ∨B LD2 at x ∈ Ω f g+φ . Here, Ω f +gφ represents the zeromeasure set where the dynamics is nondifferentiable (see [64] for detailed arguments, for example). Nevertheless, to obtain a smooth LDCBF for C, we may obtain a smooth approximation of a possibly discontinuous policy φ that satisfies dB LDj (x * ) dt ≤ α Le −βT β − B LDj (x * ) + βB LDj (x * ), for all x * such that B LD1 (x * ) < B LD2 (x * ) for j = 1 and B LD2 (x * ) < B LD1 (x * ) for j = 2. Then, we can conduct value function learning to obtain an LDCBF for C with a set B T LD that is possibly smaller than B T LD .
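As an aside not taken in the text (which instead smooths the policy and relearns a value function), a standard smooth over-approximation of the pointwise maximum B_LD1 ∨ B_LD2 is the log-sum-exp function:

```latex
B_\kappa(x) = \frac{1}{\kappa}\,\log\!\left(e^{\kappa B_{LD1}(x)} + e^{\kappa B_{LD2}(x)}\right),
\qquad
\max_j B_{LDj}(x) \;\le\; B_\kappa(x) \;\le\; \max_j B_{LDj}(x) + \frac{\log 2}{\kappa},
```

so its sublevel set {x : B_κ(x) < L/β} is a smooth, slightly conservative inner approximation of C = C_1 ∩ C_2, assuming ∨ denotes the pointwise maximum and both LDCBFs share the level L/β.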
7,890
1908.08704
2969244993
We propose a self-supervised learning framework for visual odometry (VO) that incorporates correlation of consecutive frames and takes advantage of adversarial learning. Previous methods tackle self-supervised VO as a local structure from motion (SfM) problem that recovers depth from single image and relative poses from image pairs by minimizing photometric loss between warped and captured images. As single-view depth estimation is an ill-posed problem, and photometric loss is incapable of discriminating distortion artifacts of warped images, the estimated depth is vague and pose is inaccurate. In contrast to previous methods, our framework learns a compact representation of frame-to-frame correlation, which is updated by incorporating sequential information. The updated representation is used for depth estimation. Besides, we tackle VO as a self-supervised image generation task and take advantage of Generative Adversarial Networks (GAN). The generator learns to estimate depth and pose to generate a warped target image. The discriminator evaluates the quality of generated image with high-level structural perception that overcomes the problem of pixel-wise loss in previous methods. Experiments on KITTI and Cityscapes datasets show that our method obtains more accurate depth with details preserved and predicted pose outperforms state-of-the-art self-supervised methods significantly.
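The supervision signal referred to above, warping a source frame into the target view with the predicted depth and relative pose and comparing it with the captured target image, is the standard self-supervised VO construction. The sketch below is a generic PyTorch illustration of that inverse-warp photometric loss, not this paper's implementation (which additionally feeds the warped image to a GAN discriminator); all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def inverse_warp_photometric_loss(tgt_img, src_img, depth, pose_T, K):
    """Back-project target pixels with predicted depth, move them with the predicted
    relative pose, project into the source view, sample the source image there, and
    compare with the target image.

    tgt_img, src_img: (B, 3, H, W); depth: (B, 1, H, W);
    pose_T: (B, 4, 4) target-to-source transform; K: (B, 3, 3) intrinsics.
    """
    B, _, H, W = tgt_img.shape
    device = tgt_img.device
    ys, xs = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                            torch.arange(W, device=device, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1).expand(B, -1, -1)
    # Back-project to 3D camera points and move them to the source frame.
    cam = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_cam = (pose_T @ cam_h)[:, :3]
    # Project into the source image and normalize coordinates to [-1, 1] for grid_sample.
    proj = K @ src_cam
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
    warped = F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)
    return (warped - tgt_img).abs().mean(), warped
```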
Humans are capable of perceiving the 3D environment and inferring ego-motion in a short time, but it is hard to equip an agent with similar capabilities. VO/SLAM has been treated as a multi-view geometric problem for decades. It is traditionally solved by minimizing photometric @cite_13 or geometric @cite_2 reprojection errors and works well in regular environments, but fails under challenging conditions such as dynamic objects and abrupt motions. In light of these limitations, VO has been studied with learning techniques in recent years, and many approaches with promising performance have been proposed.
{ "abstract": [ "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.", "This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public." ], "cite_N": [ "@cite_13", "@cite_2" ], "mid": [ "612478963", "1612997784" ] }
0
1908.08704
2969244993
We propose a self-supervised learning framework for visual odometry (VO) that incorporates correlation of consecutive frames and takes advantage of adversarial learning. Previous methods tackle self-supervised VO as a local structure from motion (SfM) problem that recovers depth from single image and relative poses from image pairs by minimizing photometric loss between warped and captured images. As single-view depth estimation is an ill-posed problem, and photometric loss is incapable of discriminating distortion artifacts of warped images, the estimated depth is vague and pose is inaccurate. In contrast to previous methods, our framework learns a compact representation of frame-to-frame correlation, which is updated by incorporating sequential information. The updated representation is used for depth estimation. Besides, we tackle VO as a self-supervised image generation task and take advantage of Generative Adversarial Networks (GAN). The generator learns to estimate depth and pose to generate a warped target image. The discriminator evaluates the quality of generated image with high-level structural perception that overcomes the problem of pixel-wise loss in previous methods. Experiments on KITTI and Cityscapes datasets show that our method obtains more accurate depth with details preserved and predicted pose outperforms state-of-the-art self-supervised methods significantly.
Supervised methods formulate VO as a supervised learning problem, and many methods with good results have been proposed. DeMoN @cite_39 jointly estimates pose and depth in an end-to-end manner. Inspired by the practice of parallel tracking and mapping in classic VO/SLAM, DeepTAM @cite_29 utilizes two networks for pose and depth estimation. DeepVO @cite_19 treats VO as a sequence-to-sequence learning problem by estimating poses recurrently. The limitation of supervised learning is that it requires a large amount of labeled data. Acquiring ground truth often requires expensive equipment or intensive manual labeling, and some of the gathered data are inaccurate: depth obtained by LIDAR is sparse, and the depth output of Kinect contains considerable noise. Furthermore, some ground truth (e.g., optical flow) is impossible to obtain. Previous works have tried to address these problems with synthetic datasets @cite_30 , but there is always a gap between synthetic and real-world data.
{ "abstract": [ "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.", "This paper studies monocular visual odometry (VO) problem. Most of existing VO algorithms are developed under a standard pipeline including feature extraction, feature matching, motion estimation, local optimisation, etc. Although some of them have demonstrated superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed in an end-to-end manner, it infers poses directly from a sequence of raw RGB images (videos) without adopting any module in the conventional VO pipeline. Based on the RCNNs, it not only automatically learns effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show competitive performance to state-of-the-art methods, verifying that the end-to-end Deep Learning technique can be a viable complement to the traditional VO systems.", "We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms.We compare favorably against strong classic and deep learning powered dense depth algorithms.", "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. 
The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training." ], "cite_N": [ "@cite_30", "@cite_19", "@cite_29", "@cite_39" ], "mid": [ "2951309005", "2598706937", "2887825894", "2561074213" ] }
0
1908.08704
2969244993
We propose a self-supervised learning framework for visual odometry (VO) that incorporates correlation of consecutive frames and takes advantage of adversarial learning. Previous methods tackle self-supervised VO as a local structure from motion (SfM) problem that recovers depth from single image and relative poses from image pairs by minimizing photometric loss between warped and captured images. As single-view depth estimation is an ill-posed problem, and photometric loss is incapable of discriminating distortion artifacts of warped images, the estimated depth is vague and pose is inaccurate. In contrast to previous methods, our framework learns a compact representation of frame-to-frame correlation, which is updated by incorporating sequential information. The updated representation is used for depth estimation. Besides, we tackle VO as a self-supervised image generation task and take advantage of Generative Adversarial Networks (GAN). The generator learns to estimate depth and pose to generate a warped target image. The discriminator evaluates the quality of generated image with high-level structural perception that overcomes the problem of pixel-wise loss in previous methods. Experiments on KITTI and Cityscapes datasets show that our method obtains more accurate depth with details preserved and predicted pose outperforms state-of-the-art self-supervised methods significantly.
Self-supervised methods. In order to alleviate the reliance on ground truth, many self-supervised methods have recently been proposed for VO. The key to self-supervised learning is to find internal correlations and constraints in the training data. SfMLearner @cite_20 leverages the geometric correlation of depth and pose to learn both in a coupled way, with a learned mask to exclude regions that do not meet the static scene assumption. As the first self-supervised approach to VO, SfMLearner couples depth and pose estimation through image warping, turning the task into one of minimizing photometric loss. Building on this idea, many self-supervised VO methods have been proposed, including modifications to loss functions @cite_15 @cite_16 , network architectures @cite_5 @cite_0 @cite_15 @cite_4 @cite_25 , predicted contents @cite_22 , and combinations with classic VO/SLAM @cite_32 @cite_38 . For example, GeoNet @cite_22 extends the framework to jointly estimate optical flow with forward-backward consistency to infer unstable regions, and it achieves state-of-the-art performance among self-supervised VO methods.
{ "abstract": [ "Monocular visual odometry approaches that purely rely on geometric cues are prone to scale drift and require sufficient motion parallax in successive frames for motion estimation and 3D reconstruction. In this paper, we propose to leverage deep monocular depth prediction to overcome limitations of geometry-based monocular visual odometry. To this end, we incorporate deep depth predictions into Direct Sparse Odometry (DSO) as direct virtual stereo measurements. For depth prediction, we design a novel deep network that refines predicted depth from a single image in a two-stage process. We train our network in a semi-supervised way on photoconsistency in stereo images and on consistency with accurate sparse depth reconstructions from Stereo DSO. Our deep predictions excel state-of-the-art approaches for monocular depth on the KITTI benchmark. Moreover, our Deep Virtual Stereo Odometry clearly exceeds previous monocular and deep-learning based methods in accuracy. It even achieves comparable performance to the state-of-the-art stereo methods, while only relying on a single camera.", "We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the scene, enforcing consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself.", "We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and ego-motion estimation from videos. The three components are coupled by the nature of 3D scene geometry, jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of individual modules and then combined as an image reconstruction loss, reasoning about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. 
Experimentation on the KITTI driving dataset reveals that our scheme achieves state-of-the-art results in all of the three tasks, performing better than previously unsupervised methods and comparably with supervised ones.", "We present a self-supervised approach to ignoring \"distractors\" in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each input image, which we use to train a deep convolutional network. At run-time we use the predicted ephemerality and depth as an input to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching. Our approach yields metric-scale VO using only a single camera and can recover the correct egomotion even when 90 of the image is obscured by dynamic, independently moving objects. We evaluate our robust VO methods on more than 400km of driving from the Oxford RobotCar Dataset and demonstrate reduced odometry drift and significantly improved egomotion estimation in the presence of large moving vehicles in urban traffic.", "This paper presents a deep network based unsupervised visual odometry system for 6-DoF camera pose estimation and finding dense depth map for its monocular view. The proposed network is trained using unlabeled binocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. This is achieved by introducing a novel objective function and training the network using temporally alligned sequences of monocular images. The objective function is based on the Charbonnier penalty applied to spatial and bi-directional temporal reconstruction losses. The overall novelty of the approach lies in the fact that the proposed deep framework combines a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6-DoF camera pose and superior depth map. According to our knowledge, such a framework with complete unsupervised end-to-end learning has not been tried so far, making it a novel contribution in the field. The effectiveness of the approach is demonstrated through performance comparison with the state-of-the-art methods on KITTI driving dataset.", "This paper presents an unsupervised deep learning framework called UnDEMoN for estimating dense depth map and 6-DoF camera pose information directly from monocular images. The proposed network is trained using unlabeled monocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. These improvements are achieved by introducing a new objective function that aims to minimize spatial as well as temporal reconstruction losses simultaneously. These losses are defined using bi-linear sampling kernel and penalized using the Charbonnier penalty function. The objective function, thus created, provides robustness to image gradient noises thereby improving the overall estimation accuracy without resorting to any coarse to fine strategies which are currently prevalent in the literature. Another novelty lies in the fact that we combine a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6 DOF Camera pose and superior depth map. 
The effectiveness of the proposed approach is demonstrated through performance comparison with the existing supervised and unsupervised methods on the KITTI driving dataset.", "With the success of deep learning based approaches in tackling challenging problems in computer vision, a wide range of deep architectures have recently been proposed for the task of visual odometry (VO) estimation. Most of these proposed solutions rely on supervision, which requires the acquisition of precise ground-truth camera pose information, collected using expensive motion capture systems or high-precision IMU GPS sensor rigs. In this work, we propose an unsupervised paradigm for deep visual odometry learning. We show that using a noisy teacher, which could be a standard VO pipeline, and by designing a loss term that enforces geometric consistency of the trajectory, we can train accurate deep models for VO that do not require ground-truth labels. We leverage geometry as a self-supervisory signal and propose \"Composite Transformation Constraints (CTCs)\", that automatically generate supervisory signals for training and enforce geometric consistency in the VO estimate. We also present a method of characterizing the uncertainty in VO estimates thus obtained. To evaluate our VO pipeline, we present exhaustive ablation studies that demonstrate the efficacy of end-to-end, self-supervised methodologies to train deep models for monocular VO. We show that leveraging concepts from geometry and incorporating them into the training of a recurrent neural network results in performance competitive to supervised deep VO methods.", "We propose a novel monocular visual odometry (VO) system called UnDeepVO in this paper. UnDeepVO is able to estimate the 6-DoF pose of a monocular camera and the depth of its view by using deep neural networks. There are two salient features of the proposed UnDeepVo:one is the unsupervised deep learning scheme, and the other is the absolute scale recovery. Specifically, we train UnDeepVoby using stereo image pairs to recover the scale but test it by using consecutive monocular images. Thus, UnDeepVO is a monocular system. The loss function defined for training the networks is based on spatial and temporal dense information. A system overview is shown in Fig. 1. The experiments on KITTI dataset show our UnDeepVO achieves good performance in terms of pose accuracy.", "Despite learning based methods showing promising results in single view depth estimation and visual odometry, most existing approaches treat the tasks in a supervised manner. Recent approaches to single view depth estimation explore the possibility of learning without full supervision via minimizing photometric error. In this paper, we explore the use of stereo sequences for learning depth and visual odometry. The use of stereo sequences enables the use of both spatial (between left-right pairs) and temporal (forward backward) photometric warp error, and constrains the scene depth and camera motion to be in a common, real-world scale. At test time our framework is able to estimate single view depth and two-view odometry from a monocular sequence. We also show how we can improve on a standard photometric warp loss by considering a warp of deep features. 
We show through extensive experiments that: (i) jointly training for single view depth and visual odometry improves depth prediction because of the additional constraint imposed on depths and achieves competitive results for visual odometry; (ii) deep feature-based warping loss improves upon simple photometric warp loss for both single view depth estimation and visual odometry. Our method outperforms existing learning based methods on the KITTI driving dataset in both tasks. The source code is available at this https URL", "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings." ], "cite_N": [ "@cite_38", "@cite_4", "@cite_22", "@cite_32", "@cite_0", "@cite_5", "@cite_15", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "2830339951", "2785512290", "2794387644", "2769967426", "2909119029", "2892197942", "2964206229", "2964314455", "2794337790", "2609883120" ] }
0
1908.08704
2969244993
We propose a self-supervised learning framework for visual odometry (VO) that incorporates correlation of consecutive frames and takes advantage of adversarial learning. Previous methods tackle self-supervised VO as a local structure from motion (SfM) problem that recovers depth from single image and relative poses from image pairs by minimizing photometric loss between warped and captured images. As single-view depth estimation is an ill-posed problem, and photometric loss is incapable of discriminating distortion artifacts of warped images, the estimated depth is vague and pose is inaccurate. In contrast to previous methods, our framework learns a compact representation of frame-to-frame correlation, which is updated by incorporating sequential information. The updated representation is used for depth estimation. Besides, we tackle VO as a self-supervised image generation task and take advantage of Generative Adversarial Networks (GAN). The generator learns to estimate depth and pose to generate a warped target image. The discriminator evaluates the quality of generated image with high-level structural perception that overcomes the problem of pixel-wise loss in previous methods. Experiments on KITTI and Cityscapes datasets show that our method obtains more accurate depth with details preserved and predicted pose outperforms state-of-the-art self-supervised methods significantly.
Despite its feasibility, self-supervised VO still underperforms supervised methods. Apart from the effectiveness of direct supervision, a key reason is that self-supervised methods focus mainly on geometric properties @cite_20 but pay little attention to the sequential nature of the problem. In these methods, only a few frames (no more than 5) are processed by the network, previous estimations are discarded, and the current estimation is made from scratch. Performance can instead be enhanced by taking the geometric relations of sequential observations into account.
{ "abstract": [ "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings." ], "cite_N": [ "@cite_20" ], "mid": [ "2609883120" ] }
0
1908.08326
2969493583
In this paper, we study the problem of short sentence ranking for question answering. In order to obtain the best score over all candidate sentences for a given query, we compute the representations of all sentences in advance and leverage a k-d tree to accelerate retrieval. The experimental results show that our methods beat the strong BM25 baseline on a large information retrieval corpus. In future work, we will compare our results with other representation-based neural rankers and will run a speed comparison between the BM25-based and our tree-based retrieval approaches.
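As a minimal illustration of the retrieval step described in the abstract (precomputed sentence vectors plus a k-d tree for nearest-neighbor lookup), the following sketch uses scipy's cKDTree; the array sizes mirror the paper's setup (36,735 questions, 768-dimensional BERT vectors), but the data here are random placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical inputs: `embeddings` is an (N, d) array of precomputed sentence
# vectors and `query_vec` is the encoded incoming question.
embeddings = np.random.randn(36735, 768).astype(np.float32)
query_vec = np.random.randn(768).astype(np.float32)

tree = cKDTree(embeddings)                 # build once, offline
dist, idx = tree.query(query_vec, k=20)    # top-20 nearest sentences by Euclidean distance
print(idx[:5], dist[:5])
```

Note that exact k-d trees lose much of their speed advantage in dimensions this high, which is consistent with the body of the paper building a k-means tree with beam search instead.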
In recent years, research on neural information retrieval and neural question answering has developed several effective ways to improve ranking accuracy. Interaction-based neural rankers match query-document pairs using attention-based deep models; representation-based neural rankers output sentence representations and use cosine distance to score sentence pairs. Effective representation-based models include DSSM @cite_0 , CLSM @cite_10 and LSTM-RNN @cite_4 , and effective interaction-based models include DRMM @cite_8 , Match-SRNN @cite_15 and BERT @cite_12 . Our deep model belongs to the representation-based family and outputs a final semantic representation vector for each sentence.
{ "abstract": [ "This paper develops a model that addresses sentence embedding, a hot topic in current natural language processing research, using recurrent neural networks (RNN) with Long Short-Term Memory (LSTM) cells. The proposed LSTM-RNN model sequentially takes each word in a sentence, extracts its information, and embeds it into a semantic vector. Due to its ability to capture long term memory, the LSTM-RNN accumulates increasingly richer information as it goes through the sentence, and when it reaches the last word, the hidden layer of the network provides a semantic representation of the whole sentence. In this paper, the LSTM-RNN is trained in a weakly supervised manner on user click-through data logged by a commercial web search engine. Visualization and analysis are performed to understand how the embedding process works. The model is found to automatically attenuate the unimportant words and detect the salient keywords in the sentence. Furthermore, these detected keywords are found to automatically activate different cells of the LSTM-RNN, where words belonging to a similar topic activate the same cell. As a semantic representation of the sentence, the embedding vector can be used in many different applications. These automatic keyword detection and topic allocation abilities enabled by the LSTM-RNN allow the network to perform document retrieval, a difficult language processing task, where the similarity between the query and documents can be measured by the distance between their corresponding sentence embedding vectors computed by the LSTM-RNN. On a web search task, the LSTM-RNN embedding is shown to significantly outperform several existing state of the art methods. We emphasize that the proposed model generates sentence embedding vectors that are specially useful for web document retrieval tasks. A comparison with a well known general sentence embedding method, the Paragraph Vector, is performed. The results show that the proposed method in this paper significantly outperforms Paragraph Vector method for web document retrieval task.", "In recent years, deep neural networks have led to exciting breakthroughs in speech recognition, computer vision, and natural language processing (NLP) tasks. However, there have been few positive results of deep models on ad-hoc retrieval tasks. This is partially due to the fact that many important characteristics of the ad-hoc retrieval task have not been well addressed in deep models yet. Typically, the ad-hoc retrieval task is formalized as a matching problem between two pieces of text in existing work using deep models, and treated equivalent to many NLP tasks such as paraphrase identification, question answering and automatic conversation. However, we argue that the ad-hoc retrieval task is mainly about relevance matching while most NLP matching tasks concern semantic matching, and there are some fundamental differences between these two matching tasks. Successful relevance matching requires proper handling of the exact matching signals, query term importance, and diverse matching requirements. In this paper, we propose a novel deep relevance matching model (DRMM) for ad-hoc retrieval. Specifically, our model employs a joint deep architecture at the query term level for relevance matching. By using matching histogram mapping, a feed forward matching network, and a term gating network, we can effectively deal with the three relevance matching factors mentioned above. 
Experimental results on two representative benchmark collections show that our model can significantly outperform some well-known retrieval models as well as state-of-the-art deep matching models.", "Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.", "Semantic matching, which aims to determine the matching degree between two texts, is a fundamental problem for many NLP applications. Recently, deep learning approach has been applied to this problem and significant improvements have been achieved. In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e. the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word level interaction at the current position. Based on this idea, we propose a novel deep architecture, namely Match-SRNN, to model the recursive matching structure. Firstly, a tensor is constructed to capture the word level interactions. Then a spatial RNN is applied to integrate the local interactions recursively, with importance determined by four types of gates. Finally, the matching score is calculated based on the global interaction. We show that, after degenerated to the exact matching scenario, Match-SRNN can approximate the dynamic programming process of longest common subsequence. Thus, there exists a clear interpretation for Match-SRNN. Our experiments on two semantic matching tasks showed the effectiveness of Match-SRNN, and its ability of visualizing the learned matching structure.", "In this paper, we propose a new latent semantic model that incorporates a convolutional-pooling structure over word sequences to learn low-dimensional, semantic vector representations for search queries and Web documents. In order to capture the rich contextual structures in a query or a document, we start with each word within a temporal context window in a word sequence to directly capture contextual features at the word n-gram level. Next, the salient word n-gram features in the word sequence are discovered by the model and are then aggregated to form a sentence-level feature vector. Finally, a non-linear transformation is applied to extract high-level semantic information to generate a continuous vector representation for the full text string. The proposed convolutional latent semantic model (CLSM) is trained on clickthrough data and is evaluated on a Web document ranking task using a large-scale, real-world data set. 
Results show that the proposed model effectively captures salient semantic information in queries and documents for the task while significantly outperforming previous state-of-the-art semantic models.", "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)." ], "cite_N": [ "@cite_4", "@cite_8", "@cite_0", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2142920810", "2536015822", "2136189984", "2338325072", "2131876387", "2896457183" ] }
Revisiting Semantic Representation and Tree Search for Similar Question Retrieval
In retrieval-based question answering system (Wang, Hamza, and Florian 2017;Liu et al. 2018;Guo et al. 2019), we retrieve the answer or similar question from a large question-answer pairs. We compute the semantic similar score between question-question pairs or compute the semantic related score of question-answer pairs and then rank them to find the best answer. In this paper we discuss the similar question retrieval. For the similar question retrieval problem, when given a new question in predicting, we get the most similar question in the large question-answer pairs by ranking, then we can return the corresponding answer. We consider this problem as a short sentence ranking problem based on sentence semantic matching, which is also a kind of information retrieval task. Neural information retrieval has developed in several ways to solve this problem. This task is considered to be solved in two step: A fast algorithm like TF-IDF or BM25 to retrieve about tens to hundreds candidate similar questions and then the second step leverage the neural rankers to re-rank the candidate questions by computing the questionquestion pairs similarity scores. So one weakness of this framework with two steps above is that if the first fast retrieval step fails to get the right similar questions, the sec-Figure 1: The pipeline for retrieval-based question answering. The left is the classical pipeline and the right is our approach ond re-rank step is useless. So one way to solve this weakness is to score all the question-question pairs by the neural rankers, however it consumes large amount of time. A full ranking may take several hours. See Fig 1. for the pipeline illustration. In this paper, to get the absolute most similar question on all the questions and solve the problem of long time for ranking all the data, inspired by the idea of (Zhu et al. 2018) and (Zhu et al. 2019), we propose two methods: One is to compute all the semantic vector for all the sentence by the neural ranker offline. And then we encode the new question by the neural ranker online. Tree is an efficient structure for reducing the search space (Silver et al. 2016). To accelerate the speed without losing the ranking accuracy we build a tree by k-means for vector distance computation. Previous research (Qiao et al. 2019; shows that origin BERT(Devlin et al. 2018) can not output good sentence embeddings, so we design the cosine-based loss and the fine-tune architecture of BERT to get better sentence embeddings. Another method is to compute the similarity score by deep model during tree searching. In this paper, the words, distributed representations and sentence embeddings and semantic vector, are all means the final output of the representation-based deep model. In summary our paper has three contributions: First, We fine-tuning BERT and get better sentence embeddings, as the origin embeddings from BERT is bad. Second, To accelerate the predicting speed, we build a specific tree to search on all the embeddings of test data and outperform the baseline. Third, after we build the tree by k-means, we search on the tree while computing the similarity score by interactionbased model and get reasonable results. Problem Statement In this section, we illustrate the short sentence ranking task. In training time, we have a set of question pairs label by 1 for similar and by 0 for not similar. Our goal is to learn a classifier which is able to precisely predict whether the question pair is similar. 
But we can not follow the same way as sentence pair classification task of BERT, if we want to output the sentence embeddings for each of the sentence. In predicting time, we have a set of questions Q = {q 1 , q 2 , ..., q n } that each have a labeled most similar question in the same set Q. Our goal is to use a question from the question set Q as query and find the top N similar questions from the question set Q. Although the most similar question for the query is the one that we consider to be the most important one in question answering system, but the top N results may be applied to the scenario such as similar question recommendation. In the next section we describe our deep model and the tree building methods to solve this problem. Fine-tune Training In this subsection we describe our fine-tune methods for BERT. We call it representation-based method which fine-tune BERT to get sentence embeddings. We call it interaction-based method which fine-tune BERT to compute similarity score of sentence pairs during tree searching. Representation-based method The sketch view is shown in Fig. 2. We input the two questions to the same BERT without concatenate them and output two vector representation. We adds a pooling operation to the output of BERT to derive a fixed sized sentence embedding. In detail, we use three ways to get the fixed sized representation from BERT: 1. The output of the [CLS] token. We use the output vector of the [CLS] token of BERT for the two input questions. 2. The mean pooling strategy. We compute mean of all output vectors of the BERT last layer and use it as the representation. 3. The max pooling strategy. We take the max value of the output vectors of the BERT last layer and use it as the representation. Then the two output vectors from BERT compute the cosine distance as the input for mean square error loss: loss = M SE(u · v/(||u|| * ||v||), y) where u and v is the two vectors and y is the label. The full algorithm is shown in Algorithm 1. Interaction-based method The fine-tune procedure is the same to the sentence pair classification task of BERT. The sketch view is shown in Fig. 3. Note that the colon in the figure denotes the concatenation operation. We concatenate the two questions to input it to BERT and use cross entropy loss to train. The full algorithm is shown in Algorithm 2. The fine-tuned model inputs the sentence in the tree node and query sentence as sentence pair to output the score. Tree Building In this section we describe our tree building strategies. In our tree, each non-leaf node have several child nodes. The leaf nodes contain the real data and the non-leaf nodes are virtual but undertake the function for searching or undertake the function that lead the path to the leaf nodes. Representation-based method After all the embeddings of test data are computed, we start to build the tree by kmeans. The sketch figure for tree building is shown in Fig. 4. In real the child nodes for each parent may be not that balance. We cluster the embeddings recursively. The sentence embeddings are all in the leaf nodes. The non-leaf node representation is important for the tree search as they pave the way and lead to the right leaf nodes. We use the kmeans clustering centers as the non-leaf node embeddings. We think the clustering centers is a good solution for the non-leaf node representation, as it is hard to get the exact representation from the child nodes for the parent nodes. 
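A minimal sketch of the representation-based fine-tuning described above is given below: one shared BERT encoder, mean pooling over the last layer, and an MSE loss on the cosine similarity against the 0/1 label. It uses the HuggingFace transformers API for brevity (the paper uses the original BERT-base checkpoint), the example sentences are placeholders, and hyperparameters follow those reported in the experiments (max length 64, learning rate 2e-5).

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SiameseBert(nn.Module):
    """Representation-based ranker: shared BERT encoder, mean pooling,
    cosine similarity trained with MSE against the 0/1 similarity label."""
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)

    def encode(self, enc):
        out = self.bert(**enc).last_hidden_state                  # (B, L, 768)
        mask = enc["attention_mask"].unsqueeze(-1).float()
        return (out * mask).sum(1) / mask.sum(1).clamp(min=1e-9)  # mean pooling

    def forward(self, enc_a, enc_b):
        u, v = self.encode(enc_a), self.encode(enc_b)
        return torch.cosine_similarity(u, v, dim=-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SiameseBert()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=2e-5)   # learning rate from the paper's setup

q1 = ["How do I learn Python?"]; q2 = ["What is the best way to study Python?"]
y = torch.tensor([1.0])
enc_a = tokenizer(q1, padding=True, truncation=True, max_length=64, return_tensors="pt")
enc_b = tokenizer(q2, padding=True, truncation=True, max_length=64, return_tensors="pt")
opt.zero_grad()
loss = loss_fn(model(enc_a, enc_b), y)
loss.backward(); opt.step()
```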
As we already get all the embeddings of test data, we only need to compute the vector distance during tree searching. Interaction-based method For interaction-based BERT, we first build the tree by sentence embeddings from the representation-based method above and then use the sentence strings as the leaf nodes. We take the nearest 1-5 sentence strings of cluster centers for the non-leaf node. This strategy has been proved to be effective in experiments. Tree Search In this section we describe our tree searching strategies. The two strategies are almost the same. The difference is that representation-based method compute the vector distance at each node but interaction-based method use the deep model to score the string pair at each node. Representation-based method At predicting time, we use beam search from top to down to get the nearest top N vectors for the given query vector from the whole tree. If we set the beam size to N, we first choose the top N nodes from the all the child nodes of first level and then search among the chosen child nodes' child nodes for the second level. Then we choose top N nodes from the second level. The detail beam search strategy is shown in Fig 5. Interaction-based method At predicting time, we compute the score of two sentences by BERT for each node while we are searching the tree. As we take 1-5 sentence for a non-leaf node, we use the max similarity score to decide which non-leaf node is better. The detail beam search strategy is the same as Fig 5. shows. The more sentences that are nearest to the clustering centers we take for one non-leaf node, the more computation time we need to do for a nonleaf node. But the most computation time is consumed at the leaf nodes as leaf node number is much larger than non-leaf node number. Experiments In this section, we describe the datasets, experiments parameter detail and the experimental result. Then, we give a detailed analysis of the model and experiment results. Data Description We evaluate the performance on the Quora Question Pairs datasets. Based on the Quora Question Pairs datasets, we combine the dev data and test data to get a dataset of 20000 question pairs, which contains 10000 pairs with label 1 and 10000 pairs with label 0. After remove the duplicate questions, we get a datasets of 36735 questions. We compute the all embeddings for the 36736 questions in advance. And then we use the 10000 questions which have label 1 as 10000 queries. For each query it compute 36735 cosine distances if we loop all the 36735 questions. We take the top 20 questions for the evaluation of ranking. The training datasets is 384348 question pairs. Fine-tune Training We use the pre-trained BERT-base model file from here 1 . The max sequence length is 64 and the batch size is 32. The hidden dimension of BERT or output representation dimension is 768. We use Adam optimizer with learning rate 2e-5, and a linear learning rate warm-up over 10% of the training data. Tree Building We choose 5,8,10 as clustering number for k-means. We name the trees 5-K tree, 8-K tree and 10-K tree, based on the clustering number. The depth for the tree is 5 levels for 36735 vectors. In predicting time, the 5-K tree is the slowest with best accuracy tree and the 10-K tree is the fastest with worst accuracy tree. The 8-K tree is in the middle of them. Results We Table 1. and Table 2. The compute-all result means we score all the vector pairs from 0 to end sequentially. 
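A compact sketch of the representation-based tree construction and beam search described above: the tree is built by recursive k-means with cluster centers as the non-leaf representations, leaves hold the indices of the actual sentence vectors, and retrieval keeps the closest `beam` nodes per level before ranking the sentences in the reached leaves. Cluster counts, leaf sizes, and the termination rule are illustrative simplifications, not the authors' exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

class Node:
    def __init__(self, center, children=None, indices=None):
        self.center, self.children, self.indices = center, children or [], indices

def build_tree(vectors, indices, center, k=10, leaf_size=50):
    """Recursive k-means tree: non-leaf nodes carry k-means cluster centers,
    leaves carry the indices of the actual sentence vectors."""
    if len(indices) <= leaf_size:
        return Node(center, indices=indices)
    km = KMeans(n_clusters=min(k, len(indices)), n_init=5).fit(vectors[indices])
    children = [build_tree(vectors, indices[km.labels_ == c], km.cluster_centers_[c], k, leaf_size)
                for c in range(km.n_clusters) if np.any(km.labels_ == c)]
    return Node(center, children=children)

def beam_search(root, vectors, query, beam=20, topn=20):
    """Top-down beam search: keep the `beam` closest nodes per level,
    then rank the sentence vectors found in the reached leaves."""
    frontier, candidates = [root], []
    while frontier:
        expanded = []
        for node in frontier:
            if node.indices is not None:
                candidates.extend(node.indices.tolist())
            else:
                expanded.extend(node.children)
        expanded.sort(key=lambda n: np.linalg.norm(n.center - query))
        frontier = expanded[:beam]
    candidates = np.array(candidates, dtype=int)
    order = np.argsort(np.linalg.norm(vectors[candidates] - query, axis=1))
    return candidates[order[:topn]]

# vectors: (N, 768) precomputed sentence embeddings; query: (768,) query embedding.
# root = build_tree(vectors, np.arange(len(vectors)), vectors.mean(0))
# top = beam_search(root, vectors, query)
```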
The vector distance computation of compute-all uses cosine distance and euclidean distance, and k-d tree uses euclidean distance. The speed comparison is shown in Table 4. We count the number of vector distance computation times for representationbased method or the number of scoring times for sentence pair for interaction-based method. Our tree-based methods outperform (Arora, Liang, and Ma 2016) Case Study and Error Analysis We show some examples from the eval results to demonstrate the ability of our methods. result of top 5 for the query question "Who is the best bodybuilder of all time ?" for compute-all and our 10-K tree. The results show that the ranking accuracy losing may be caused by the non-leaf representation's error, as the results of our tree is far from the query question. We even can not find the right result in the retrieved top 20 questions. We think the non-leaf node lead to the wrong children in tree searching. It is the weakness of our tree building strategy. Conclusion and Future Work In this paper, we study the problem of short sentence ranking for question answering. In order to get best similar score in all the questions when given a question as query and accelerate the predicting speed, we propose two methods. The first method is compute the representation for all the questions in advance and build a tree by k-means. The second method is to train a deep model and then use it to compute similarity scores of two sentences during tree searching. The experimental results show that our methods outperform the strong baseline on the short sentence retrieval datasets we construct. The sentence embeddings quality may be improved by better BERT or the XL-Net ) and we will discover more powerful non-leaf node embeddings for the tree search and evaluate on other datasets (Cer et al. 2017), as previous research (Zhu et al. 2018;Zhu et al. 2019) shows that the tree's preformance could reach the performance of compute-all. In conclusion, our goal is to discover better embeddings and better tree structure in the future.
2,144
1908.08326
2969493583
In this paper, we study the problem of short sentence ranking for question answering. In order to obtain the best score over all candidate sentences when given a query, we compute the representations for all sentences in advance and leverage a k-d tree to accelerate retrieval. The experimental results show that our method beats the strong BM25 baseline on a large information retrieval corpus. In future work, we will compare our results to other representation-based neural rankers and carry out a speed comparison between BM25-based retrieval and our tree-based retrieval approach.
Sentence embeddings are an important topic in this research area. Skip-Thought @cite_5 inputs one sentence to predict its previous and next sentences. InferSent @cite_9 outperforms Skip-Thought. @cite_16 is a method that uses unsupervised word vectors @cite_17 to construct sentence vectors and serves as a strong baseline. Universal Sentence Encoder @cite_7 presents two models for producing sentence embeddings that demonstrate good transfer to a number of other NLP tasks.
{ "abstract": [ "We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse transfer tasks. Two variants of the encoding models allow for trade-offs between accuracy and compute resources. For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance. Comparisons are made with baselines that use word level transfer learning via pretrained word embeddings as well as baselines do not use any transfer learning. We find that transfer learning using sentence embeddings tends to outperform word level transfer. With transfer learning via sentence embeddings, we observe surprisingly good performance with minimal amounts of supervised training data for a transfer task. We obtain encouraging results on Word Embedding Association Tests (WEAT) targeted at detecting model bias. Our pre-trained sentence encoding models are made freely available for download and on TF Hub.", "Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have however not been so successful. Several attempts at learning unsupervised representations of sentences have not reached satisfactory enough performance to be widely adopted. In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks. Much like how computer vision uses ImageNet to obtain features, which can then be transferred to other tasks, our work tends to indicate the suitability of natural language inference for transfer learning to other NLP tasks. Our encoder is publicly available.", "In online education systems, finding similar exercises is a fundamental task of many applications, such as exercise retrieval and student modeling. Several approaches have been proposed for this task by simply using the specific textual content (e.g. the same knowledge concepts or the similar words) in exercises. However, the problem of how to systematically exploit the rich semantic information embedded in multiple heterogenous data (e.g. texts and images) to precisely retrieve similar exercises remains pretty much open. To this end, in this paper, we develop a novel Multimodal Attention-based Neural Network (MANN) framework for finding similar exercises in large-scale online education systems by learning a unified semantic representation from the heterogenous data. In MANN, given exercises with texts, images and knowledge concepts, we first apply a convolutional neural network to extract image representations and use an embedding layer for representing concepts. Then, we design an attention-based long short-term memory network to learn a unified semantic representation of each exercise in a multimodal way. Here, two attention strategies are proposed to capture the associations of texts and images, texts and knowledge concepts, respectively. Moreover, with a Similarity Attention, the similar parts in each exercise pair are also measured. Finally, we develop a pairwise training strategy for returning similar exercises. 
Extensive experimental results on real-world data clearly validate the effectiveness and the interpretation power of MANN.", "The success of neural network methods for computing word embeddings has motivated methods for generating semantic embeddings of longer pieces of text, such as sentences and paragraphs. Surprisingly, (ICLR'16) showed that such complicated methods are outperformed, especially in out-of-domain (transfer learning) settings, by simpler methods involving mild retraining of word embeddings and basic linear regression. The method of requires retraining with a substantial labeled dataset such as Paraphrase Database (, 2013). @PARASPLIT The current paper goes further, showing that the following completely unsupervised sentence embedding is a formidable baseline: Use word embeddings computed using one of the popular methods on unlabeled corpus like Wikipedia, represent the sentence by a weighted average of the word vectors, and then modify them a bit using PCA SVD. This weighting improves performance by about 10 to 30 in textual similarity tasks, and beats sophisticated supervised methods including RNN's and LSTM's. It even improves 's embeddings. This simple method should be used as the baseline to beat in future, especially when labeled training data is scarce or nonexistent. @PARASPLIT The paper also gives a theoretical explanation of the success of the above unsupervised method using a latent variable generative model for sentences, which is a simple extension of the model in (TACL'16) with new \"smoothing\" terms that allow for words occurring out of context, as well as high probabilities for words like and, not in all contexts.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition." ], "cite_N": [ "@cite_7", "@cite_9", "@cite_5", "@cite_16", "@cite_17" ], "mid": [ "2794557536", "2612953412", "2884517879", "2752172973", "2250539671" ] }
Revisiting Semantic Representation and Tree Search for Similar Question Retrieval
In retrieval-based question answering system (Wang, Hamza, and Florian 2017;Liu et al. 2018;Guo et al. 2019), we retrieve the answer or similar question from a large question-answer pairs. We compute the semantic similar score between question-question pairs or compute the semantic related score of question-answer pairs and then rank them to find the best answer. In this paper we discuss the similar question retrieval. For the similar question retrieval problem, when given a new question in predicting, we get the most similar question in the large question-answer pairs by ranking, then we can return the corresponding answer. We consider this problem as a short sentence ranking problem based on sentence semantic matching, which is also a kind of information retrieval task. Neural information retrieval has developed in several ways to solve this problem. This task is considered to be solved in two step: A fast algorithm like TF-IDF or BM25 to retrieve about tens to hundreds candidate similar questions and then the second step leverage the neural rankers to re-rank the candidate questions by computing the questionquestion pairs similarity scores. So one weakness of this framework with two steps above is that if the first fast retrieval step fails to get the right similar questions, the sec-Figure 1: The pipeline for retrieval-based question answering. The left is the classical pipeline and the right is our approach ond re-rank step is useless. So one way to solve this weakness is to score all the question-question pairs by the neural rankers, however it consumes large amount of time. A full ranking may take several hours. See Fig 1. for the pipeline illustration. In this paper, to get the absolute most similar question on all the questions and solve the problem of long time for ranking all the data, inspired by the idea of (Zhu et al. 2018) and (Zhu et al. 2019), we propose two methods: One is to compute all the semantic vector for all the sentence by the neural ranker offline. And then we encode the new question by the neural ranker online. Tree is an efficient structure for reducing the search space (Silver et al. 2016). To accelerate the speed without losing the ranking accuracy we build a tree by k-means for vector distance computation. Previous research (Qiao et al. 2019; shows that origin BERT(Devlin et al. 2018) can not output good sentence embeddings, so we design the cosine-based loss and the fine-tune architecture of BERT to get better sentence embeddings. Another method is to compute the similarity score by deep model during tree searching. In this paper, the words, distributed representations and sentence embeddings and semantic vector, are all means the final output of the representation-based deep model. In summary our paper has three contributions: First, We fine-tuning BERT and get better sentence embeddings, as the origin embeddings from BERT is bad. Second, To accelerate the predicting speed, we build a specific tree to search on all the embeddings of test data and outperform the baseline. Third, after we build the tree by k-means, we search on the tree while computing the similarity score by interactionbased model and get reasonable results. Problem Statement In this section, we illustrate the short sentence ranking task. In training time, we have a set of question pairs label by 1 for similar and by 0 for not similar. Our goal is to learn a classifier which is able to precisely predict whether the question pair is similar. 
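As a point of reference for the discussion that follows, this is roughly what the standard BERT sentence-pair classification setup looks like when trained on the labeled question pairs above. It assumes the HuggingFace `transformers` library, which the paper does not name, and omits the learning-rate warm-up for brevity:

```python
# Sketch of training a BERT sentence-pair classifier on labeled question pairs
# (the interaction-based setting). Assumes the HuggingFace `transformers` library;
# hyperparameters follow the paper (max length 64, Adam with learning rate 2e-5),
# but the warm-up schedule is omitted for brevity.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

def train_step(q1_batch, q2_batch, labels):
    """One optimization step on a batch of (question1, question2, 0/1-label) triples."""
    enc = tokenizer(q1_batch, q2_batch, padding=True, truncation=True,
                    max_length=64, return_tensors="pt")
    out = model(**enc, labels=torch.tensor(labels))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# loss = train_step(["How do I learn Python?"], ["What is the best way to learn Python?"], [1])
```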
But we can not follow the same way as sentence pair classification task of BERT, if we want to output the sentence embeddings for each of the sentence. In predicting time, we have a set of questions Q = {q 1 , q 2 , ..., q n } that each have a labeled most similar question in the same set Q. Our goal is to use a question from the question set Q as query and find the top N similar questions from the question set Q. Although the most similar question for the query is the one that we consider to be the most important one in question answering system, but the top N results may be applied to the scenario such as similar question recommendation. In the next section we describe our deep model and the tree building methods to solve this problem. Fine-tune Training In this subsection we describe our fine-tune methods for BERT. We call it representation-based method which fine-tune BERT to get sentence embeddings. We call it interaction-based method which fine-tune BERT to compute similarity score of sentence pairs during tree searching. Representation-based method The sketch view is shown in Fig. 2. We input the two questions to the same BERT without concatenate them and output two vector representation. We adds a pooling operation to the output of BERT to derive a fixed sized sentence embedding. In detail, we use three ways to get the fixed sized representation from BERT: 1. The output of the [CLS] token. We use the output vector of the [CLS] token of BERT for the two input questions. 2. The mean pooling strategy. We compute mean of all output vectors of the BERT last layer and use it as the representation. 3. The max pooling strategy. We take the max value of the output vectors of the BERT last layer and use it as the representation. Then the two output vectors from BERT compute the cosine distance as the input for mean square error loss: loss = M SE(u · v/(||u|| * ||v||), y) where u and v is the two vectors and y is the label. The full algorithm is shown in Algorithm 1. Interaction-based method The fine-tune procedure is the same to the sentence pair classification task of BERT. The sketch view is shown in Fig. 3. Note that the colon in the figure denotes the concatenation operation. We concatenate the two questions to input it to BERT and use cross entropy loss to train. The full algorithm is shown in Algorithm 2. The fine-tuned model inputs the sentence in the tree node and query sentence as sentence pair to output the score. Tree Building In this section we describe our tree building strategies. In our tree, each non-leaf node have several child nodes. The leaf nodes contain the real data and the non-leaf nodes are virtual but undertake the function for searching or undertake the function that lead the path to the leaf nodes. Representation-based method After all the embeddings of test data are computed, we start to build the tree by kmeans. The sketch figure for tree building is shown in Fig. 4. In real the child nodes for each parent may be not that balance. We cluster the embeddings recursively. The sentence embeddings are all in the leaf nodes. The non-leaf node representation is important for the tree search as they pave the way and lead to the right leaf nodes. We use the kmeans clustering centers as the non-leaf node embeddings. We think the clustering centers is a good solution for the non-leaf node representation, as it is hard to get the exact representation from the child nodes for the parent nodes. 
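The embeddings clustered in the tree construction just described come from the representation-based fine-tuning at the start of this section. A minimal sketch, assuming the HuggingFace `transformers` library (not specified in the paper) and showing the mean-pooling variant with the cosine/MSE loss above:

```python
# Sketch of the representation-based fine-tuning: the two questions are encoded by the same BERT
# (no concatenation), pooled into fixed-size vectors, and the cosine similarity of the two vectors
# is regressed against the 0/1 label with an MSE loss. Mean pooling is shown; [CLS] or max pooling
# are the other options described in the paper.
import torch
import torch.nn.functional as F
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
optimizer = torch.optim.Adam(bert.parameters(), lr=2e-5)

def encode(questions):
    """Mean-pool the last-layer token vectors into one 768-d sentence embedding per question."""
    enc = tokenizer(questions, padding=True, truncation=True, max_length=64, return_tensors="pt")
    hidden = bert(**enc).last_hidden_state                    # (batch, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # masked mean pooling

def train_step(q1_batch, q2_batch, labels):
    u, v = encode(q1_batch), encode(q2_batch)
    loss = F.mse_loss(F.cosine_similarity(u, v), torch.tensor(labels, dtype=torch.float))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```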
As we already get all the embeddings of test data, we only need to compute the vector distance during tree searching. Interaction-based method For interaction-based BERT, we first build the tree by sentence embeddings from the representation-based method above and then use the sentence strings as the leaf nodes. We take the nearest 1-5 sentence strings of cluster centers for the non-leaf node. This strategy has been proved to be effective in experiments. Tree Search In this section we describe our tree searching strategies. The two strategies are almost the same. The difference is that representation-based method compute the vector distance at each node but interaction-based method use the deep model to score the string pair at each node. Representation-based method At predicting time, we use beam search from top to down to get the nearest top N vectors for the given query vector from the whole tree. If we set the beam size to N, we first choose the top N nodes from the all the child nodes of first level and then search among the chosen child nodes' child nodes for the second level. Then we choose top N nodes from the second level. The detail beam search strategy is shown in Fig 5. Interaction-based method At predicting time, we compute the score of two sentences by BERT for each node while we are searching the tree. As we take 1-5 sentence for a non-leaf node, we use the max similarity score to decide which non-leaf node is better. The detail beam search strategy is the same as Fig 5. shows. The more sentences that are nearest to the clustering centers we take for one non-leaf node, the more computation time we need to do for a nonleaf node. But the most computation time is consumed at the leaf nodes as leaf node number is much larger than non-leaf node number. Experiments In this section, we describe the datasets, experiments parameter detail and the experimental result. Then, we give a detailed analysis of the model and experiment results. Data Description We evaluate the performance on the Quora Question Pairs datasets. Based on the Quora Question Pairs datasets, we combine the dev data and test data to get a dataset of 20000 question pairs, which contains 10000 pairs with label 1 and 10000 pairs with label 0. After remove the duplicate questions, we get a datasets of 36735 questions. We compute the all embeddings for the 36736 questions in advance. And then we use the 10000 questions which have label 1 as 10000 queries. For each query it compute 36735 cosine distances if we loop all the 36735 questions. We take the top 20 questions for the evaluation of ranking. The training datasets is 384348 question pairs. Fine-tune Training We use the pre-trained BERT-base model file from here 1 . The max sequence length is 64 and the batch size is 32. The hidden dimension of BERT or output representation dimension is 768. We use Adam optimizer with learning rate 2e-5, and a linear learning rate warm-up over 10% of the training data. Tree Building We choose 5,8,10 as clustering number for k-means. We name the trees 5-K tree, 8-K tree and 10-K tree, based on the clustering number. The depth for the tree is 5 levels for 36735 vectors. In predicting time, the 5-K tree is the slowest with best accuracy tree and the 10-K tree is the fastest with worst accuracy tree. The 8-K tree is in the middle of them. Results We Table 1. and Table 2. The compute-all result means we score all the vector pairs from 0 to end sequentially. 
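For comparison with the compute-all scoring just described, the following is a minimal sketch of the beam search over the k-means tree from the Tree Search subsection above. It assumes tree nodes that carry a routing vector `center`, a list `children`, and, for leaves, the stored `(index, embedding)` pairs in `items`; all names are illustrative:

```python
# Sketch of the top-down beam search (representation-based method). At each level we keep the N
# highest-scoring nodes by cosine similarity to the query vector and expand their children;
# at the leaves we score the stored sentence embeddings directly.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def beam_search(root, query, beam_size=20):
    beam = root.children if root.children else [root]
    results = []                                        # (score, sentence index) pairs from leaves
    while beam:
        # Keep the beam_size best candidates at this level, judged by their routing vectors.
        scored = sorted(beam, key=lambda n: cosine(query, n.center), reverse=True)[:beam_size]
        next_beam = []
        for node in scored:
            if node.items is not None:                  # leaf: score the real sentence embeddings
                results.extend((cosine(query, e), i) for i, e in node.items)
            else:                                       # non-leaf: expand children for the next level
                next_beam.extend(node.children)
        beam = next_beam
    results.sort(reverse=True)
    return results[:beam_size]                          # top-N (score, index) pairs

# top = beam_search(root, emb[0], beam_size=20)
```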
The vector distance computation of compute-all uses cosine distance and euclidean distance, and k-d tree uses euclidean distance. The speed comparison is shown in Table 4. We count the number of vector distance computation times for representationbased method or the number of scoring times for sentence pair for interaction-based method. Our tree-based methods outperform (Arora, Liang, and Ma 2016) Case Study and Error Analysis We show some examples from the eval results to demonstrate the ability of our methods. result of top 5 for the query question "Who is the best bodybuilder of all time ?" for compute-all and our 10-K tree. The results show that the ranking accuracy losing may be caused by the non-leaf representation's error, as the results of our tree is far from the query question. We even can not find the right result in the retrieved top 20 questions. We think the non-leaf node lead to the wrong children in tree searching. It is the weakness of our tree building strategy. Conclusion and Future Work In this paper, we study the problem of short sentence ranking for question answering. In order to get best similar score in all the questions when given a question as query and accelerate the predicting speed, we propose two methods. The first method is compute the representation for all the questions in advance and build a tree by k-means. The second method is to train a deep model and then use it to compute similarity scores of two sentences during tree searching. The experimental results show that our methods outperform the strong baseline on the short sentence retrieval datasets we construct. The sentence embeddings quality may be improved by better BERT or the XL-Net ) and we will discover more powerful non-leaf node embeddings for the tree search and evaluate on other datasets (Cer et al. 2017), as previous research (Zhu et al. 2018;Zhu et al. 2019) shows that the tree's preformance could reach the performance of compute-all. In conclusion, our goal is to discover better embeddings and better tree structure in the future.
2,144
1908.07715
2969758725
In competitive parallel computing, identical copies of the code in a phase of a sequential program are assigned to processor cores and the result of the fastest core is adopted. In the literature, it is reported that a superlinear speedup can be achieved if there is enough fluctuation among the execution times consumed by the cores. Competitive parallel computing is therefore a promising approach to using a huge number of cores effectively. However, there are at present few theoretical studies on the speedups that competitive parallel computing can achieve. In this paper, we present a behavioral model of competitive parallel computing and provide a means to predict the speedup it yields through theoretical analyses and simulations. We also found a sufficient condition under which competitive parallel computing provides a linear speedup: more specifically, it is sufficient for the execution times consumed by the cores to follow an exponential distribution. In addition, we found that different distributions with the identical coefficient of variation (CV) do not always provide the identical speedup. While CV is a convenient measure for predicting a speedup, it is not enough to provide an exact prediction.
Wolfgang @cite_10 proposes random competition, in which the computations compete using the randomness in the search algorithm. Although he analyzes speedups based on the variance of the measured execution times, CV is not mentioned.
{ "abstract": [ "We present a very simple parallel execution model suitable for inference systems with nondeterministic choices (OR-branching points). All the parallel processors solve the same task without any communication. Their programs only differ in the initialization of the random number generator used for branch selection in depth first backtracking search. This model, called random competition, permits us to calculate analytically the parallel performance for arbitrary numbers of processors. This can be done exactly and without any experiments on a parallel machine. Finally, due to their simplicity, competition architectures are easy (and therefore low-priced) to build." ], "cite_N": [ "@cite_10" ], "mid": [ "1774803432" ] }
A sufficient condition for a linear speedup in competitive parallel computing
Multi-core and many-core are in the mainstream of parallel computing and there is a steady increase in the number of their cores. However, in the near future, it is expected that the degree of parallelism is below the number of cores which the hardware provides due to the restriction of the problems to solve or the algorithms to execute [1]. Meanwhile, it is getting more and more difficult to write a parallel program because 1) it is necessary to control a huge amount of flows of program execution and 2) the elements of parallel computing system get diversified over the last decade. In the future, writing a parallel program gets complicated extremely as the number of cores grows [2,3]. To alleviate these above problems, competitive parallel computing or its equivalent are proposed [4,5,6]. In competitive parallel computing, the identical copies of a code in a phase of a sequential program are assigned to processor cores and the result of the fastest core is adopted. It is reported that a superlinear speedup can be achieved if there is an enough fluctuation among the execution times consumed by the cores. Competitive parallel computing has some advantages over conventional cooperative parallel computing; 1) in competitive parallel computing, it is not necessary to parallelize an existing program, and 2) competitive parallel computing is applicable to algorithms which are impossible or difficult to parallelize. Competitive parallel computing is a promising approach to use a huge amount of cores effectively. However, there is few theoretical studies on speedups which can be achieved by competitive parallel computing at present. Specifically, although it is intuitively understood that a larger fluctuation among the execution times, namely, a larger coefficient of variation (CV) provides a larger speedup, to the best of our knowledge, the relation between a CV and a speedup is not evaluated quantitatively in detail. The contributions of this paper are following: • We present the behavioral model of competitive parallel computing and provide a means to predict a speedup which competitive parallel computing yields through theoretical analyses and simulations. • We found a sufficient condition to provide a linear speedup which competitive parallel computing yields. More specifically, it is sufficient for the execution times which consumed by the cores to follow exponential distribution. In addition, we proved that exponential distribution is not the only distribution to achieve a linear speedup. In other words, exponential distribution is not a necessary condition to achieve a linear speedup. • We found that the different distributions which have the identical CV do not always provide the identical speedup. While CV is a convenient measure to predict a speedup, it is not enough to provide an exact prediction. The rest of this paper is organized as follows: Section 2 provides related work. In Section 3, we propose a mathematical model to evaluate competitive parallel computing and present a means to calculate the execution time of competitive parallel computing. In Section 4, we evaluate speedups which competitive parallel computing provides through Monte Carlo simulation based on our proposed model and present the relation between a CV and a speedup quantitatively. Finally, we conclude our study and describe our future work in Section 5. 
A Mathematical Analysis of Competitive Parallel Computing In this section, we show a behavioral model of competitive parallel computing and describe how to calculate the execution time based on the model. A model of the program execution in competitive parallel computing In general, a sequential program consists of several phases. In this paper, we define the term 'a phase' as a program region which is between two semantic points. For example, a phase is a sentence of a program, a function call, or an iteration of a loop. Typically, the result which is produced in a phase is consumed in the consecutive or later phases. While sequential computing executes a phase on a single core, competitive parallel computing assigns the identical copies of the phase to multiple cores and makes the cores compete. The result of the fastest core is adopted and the other cores are terminated. Then, the flow of control goes to the next phase. In this paper, to model competitive parallel computing which behaves as mentioned above, we consider the minimum model which represents a program consisting of a single phase and running on n cores as shown in Figure 1. The execution times of a phase running on the different cores might be different each other if the cores are assigned to the different algorithms or the identical algorithm with the different parameters. The external factors including cache misses and network delay also produce the fluctuation of the execution times. These cause randomness. In order to model the execution of such a program, we denote the execution time of Core i by X i , i = 1, 2, . . . , n, where {X i } n i=1 are independent and identically distributed random variables (i.i.d. r.v.'s). At this time, the overall execution time of the program is Y n = min(X 1 , X 2 , . . . , X n ) because the execution time of the program is the execution time of the fastest core. In the next section, we obtain the probability distribution which the random variable Y n follows. Calculations of the execution time As mentioned above, we assume that n random variables X 1 , X 2 , . . . , X n is i.i.d. r.v.'s. We denote the cumulative distribution function (CDF) which these n random variables follow by F X (x) = P (X 1 ≤ x) = P (X 2 ≤ x) = · · · = P (X n ≤ x) and also denote its probability density function (PDF) (if exists) by f X (x). Proposition 3.1. The random variable Y n is the minimum among n random variables X 1 , X 2 , . . . , X n , that is, Y n = min(X 1 , X 2 , . . . , X n ). The CDF which Y n follows is as follows: F Yn (y) = 1 − (1 − F X (y)) n . Proof. F Yn (y) = P (Y n ≤ y) = P (min(X 1 , X 2 , . . . , X n ) ≤ y) = 1 − P (min(X 1 , X 2 , . . . , X n ) > y) = 1 − P (X 1 > y, X 2 > y, . . . , X n > y) = 1 − P (X 1 > y)P (X 2 > y) · · · P (X n > y) (∵ independent random variables) = 1 − (1 − P (X 1 ≤ y))(1 − P (X 2 ≤ y)) · · · (1 − P (X n ≤ y)) = 1 − (1 − F X (y))(1 − F X (y)) · · · (1 − F X (y)) = 1 − (1 − F X (y)) n(1) Evaluation We calculated speedups which competitive parallel computing provides through Monte Carlo simulation based on our proposed model. We gave four probability distributions to the model. In this section, we describe the four probability distribution [10,11]. Then, we demonstrate the exact solution in case that random variables which represent the execution time follows exponential distribution. Finally, we show the results of the simulation and discuss the relation between a CV and a speedup. 
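A quick numerical check of Proposition 3.1, here with exponential execution times; the rate, the number of cores, and the sample size are arbitrary illustrative choices:

```python
# Numerical check of Proposition 3.1: the empirical CDF of Y_n = min(X_1, ..., X_n)
# should match 1 - (1 - F_X(y))^n. Exponential execution times with rate lam are used
# only as an illustration; lam, n, and the sample size are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
lam, n, samples = 1.0, 4, 200_000

X = rng.exponential(scale=1.0 / lam, size=(samples, n))   # n i.i.d. execution times per run
Y = X.min(axis=1)                                          # overall execution time per run

for y in [0.1, 0.25, 0.5, 1.0]:
    F_X = 1.0 - np.exp(-lam * y)
    predicted = 1.0 - (1.0 - F_X) ** n                     # CDF from Proposition 3.1
    empirical = (Y <= y).mean()
    print(f"y={y:4.2f}  predicted={predicted:.4f}  empirical={empirical:.4f}")
```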
Probability distribution of the execution time The overall execution time may vary significantly depending on the distribution of random variables which represent the execution time for each processor core. In this study, we assume that a random variable follows one of four distributions: exponential distribution, Erlang distribution, hyperexponential distribution, and uniform distribution. In exponential distribution whose parameter is λ, the CDF for X i is assumed in the form F X (x) = P (X i ≤ x) = 1 − e −λx(2) for i = 1, 2, . . . , n, so that the PDF is in the form f X (x) = λe −λx(3) and expected value (mean execution time) becomes E(X i ) = 1 λ . The CDF and the PDF of Erlang distribution are F X (x) = 1 − e −λkx k−1 r=0 (λkx) r r! , f X (x) = (λk) k (k − 1)! x k−1 e −λkx , respectively, where k is the number of phases 1 . The expected value of random variables which follow the above Erlang distribution is also 1 λ . The CDF and the PDF of hyperexponential distribution are F X (x) = 1 − k j=1 C j e −λ j x , f X (x) = k j=1 C j λ j e −λ j x , respectively, where {C j } k j=1 is an arbitrary discrete distribution. We chose parameters of hyperexponential distribution so that all of their expected value is equal to 1 λ as in the above two distributions. As a result, we obtained the PDF of hyperexponential distribution as follows: f X (x) = a 2 λe −aλx + a 4a − 2 λe − a 2a−1 λx ,(4) where a( = 1 2 ) is a real number. The adoption of these distributions for the execution time is based on the following idea. For non-negative random variables with the same expected value, CV is the most useful and popular characteristic parameter for comparing the degree of variation. The CV c(X) for non-negative random variable X is defined by c(X) = V (X) E(X) where V (X) is variance of X, i.e., V (X) = E(X 2 ) − E(X) 2 . It is clear that for fixed value of E(X), as increases the value of c(X), the variance of X also increases. In the field of probability theory, exponential distribution, Erlang distribution, and hyperexponential distribution are the most typical distributions with different CV. It is well known that c(X) = 1 for exponential distribution, c(X) < 1 for Erlang distribution, and c(X) > 1 for hyperexponential distribution. In other words, for the same expected value, Erlang distribution shows lower variance and hyperexponential distribution shows higher variance comparing with exponential distribution. In this paper, we additionally adopt uniform distribution. The CDF and the PDF of uniform distribution are F X (x) = P (X i ≤ x) = x − a b − a , f X (x) = 1 b − a , respectively, where 0 ≤ a < X i ≤ b. The CV of uniform distribution is less than one, that is, c(X i ) = b − a √ 3(b + a) < 1. The exact solution with exponential distribution We show the exact solution of the expected execution time in case that the n random variables follow exponential distribution. Hereafter, we assume that λ = 1 without loss of generality. the execution time for n = 1 In general, the expected value of a random variable X is calculated as follows: E(X) = ∞ 0 xf (x)dx,(5) where f (x) is PDF which the random variable follows. For n = 1, Y 1 = min(X 1 ) = X 1 . Therefore, using Equation 3 and Equation 5, E(Y 1 ) = ∞ 0 xλe −λx dx = 1 λ = 1. the execution time for n > 1 Hereafter, we define a speedup as S n = E(Y 1 )/E(Y n ). Therefore, E(Y n ) = ∞ 0 yf Y (y)dy = 1 λn = 1 n . Consequently, S n = n, that is, a linear speedup. 
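Written out cleanly, the exponential case above reduces to the observation that Y_n is itself exponential with rate nλ; the intermediate density step, which the text skips, is:

```latex
% Substituting F_X(y) = 1 - e^{-\lambda y} into Proposition 3.1:
F_{Y_n}(y) = 1 - \bigl(1 - F_X(y)\bigr)^n = 1 - e^{-n\lambda y},
\qquad
f_{Y_n}(y) = n\lambda\, e^{-n\lambda y},
\qquad
E(Y_n) = \int_0^\infty y\, n\lambda\, e^{-n\lambda y}\, dy = \frac{1}{n\lambda},
\qquad
S_n = \frac{E(Y_1)}{E(Y_n)} = n .
```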
(a) is not a necessary condition for (b): It is sufficient to show another distribution which provides (b). If the random variable Y n follows the distribution which is represented as Equation 4 with nλ instead of λ, namely, f Yn (x) = a 2 nλe −anλx + a 4a − 2 nλe − a 2a−1 nλx , then E(Y n ) = 1 λn = 1 n . This is another example which provides (b) and is different to Equation 6 which is obtained from exponential distribution. Therefore, it is proved that (a) is not a necessary condition for (b). The results of numerical experiments To evaluate the relation between a CV and a speedup, we calculated the execution times with varying the distributions which the random variables follow. We carried out Monte Carlo simulations as shown in Algorithm 1. Input: the number of steps N , the distribution D, the number of processor cores n. Output: the execution time. 1. i ← 0. 2. S ← 0. 3. Substitute the random numbers which follows the distribution D into the n random variables X 1 , X 2 , . . . , X n [12]. 4. Y n ← min(X 1 , X 2 , . . . , X n ). 5. S ← S + Y n . 6. i ← i + 1. if i < N go to Step 3, otherwise go to Step 8. Output S N . We varied the number of cores n = {1, 2, . . . , 100} and defined N as 100,000. We denote exponential distribution, Erlang distribution with parameter k, and hyperexponential distribution with parameter a by M, E k , H 2(a) , respectively, derived from Kendall's notation in queuing theory [13]. Comparing Speedups among various CVs The speedups with varying CV are shown in Figure 2. CVs are shown in Table 1. As a whole, more cores and a larger CV bring a larger speedup. From these results, we theoretically confirmed the fact which is intuitively predicted and is confirmed experimentally by other studies. With the distribution M, a Speedups for one hundred processors with extreme CVs In Section 4.3.1, we found that the different CV provides the different speedup. To explore the relation between a CV and a speedup in more detail, we carried out simulations with varying CV more finely for n = 100. We show the speedups with varying the parameter k of Erlang distribution 2 to 100 in Figure 3. Speedups are 7.68 and 15.14 for 0.58 and 0.71 as CV, respectively. These speedups are possibly acceptable as performance gains for n = 100. Meanwhile, the speedups are lower than 2 with lower CVs. These speedups are unacceptable unless computing resources and electricity are abundantly available. We show the speedups with varying the parameter a of hyperexponential Figure 4. Note that hyperexponential distribution is equivalent to exponential distribution if a = 1. While the speedup is 426.08 for 1.59 as CV, the speedup grows rapidly, that is, as 1,798.56 and 4,975.12 for 1.70 and 1.72 as CV, respectively. If someone finds an application which shows such behavior, a huge performance gain is obtained. Although it might not be realistic to find such an application, it is meaningful to obtain a theoretical perspective for a performance gain. Comparing Speedups when fixing CV Finally, we compare the speedups for Erlang distribution with the ones for uniform distribution, to explore the speedups with the identical CV provided by different distributions. We adopted the parameter k as 3 for Erlang distribution and chose the parameters as a = 0, b = 2 for uniform distribution so that the expected value is 1, which is the same as other distributions. As a result, CV is 1 √ 3 ≈ 0.58 for both Erlang distribution and uniform distribution. 
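A small simulation of the fixed-CV comparison set up in the last paragraph: Erlang with k = 3 and Uniform(0, 2) both have mean 1 and CV = 1/√3, yet yield different estimated speedups. Core counts and sample size are illustrative:

```python
# Sketch of the fixed-CV comparison: Erlang(k=3) and Uniform(0, 2) both have mean 1 and
# CV = 1/sqrt(3) ~= 0.58, yet yield different estimated speedups, which is the point made
# in the text. The core counts are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)

def speedup(sampler, n, N=100_000):
    E_Y1 = sampler(size=(N, 1)).min(axis=1).mean()
    E_Yn = sampler(size=(N, n)).min(axis=1).mean()
    return E_Y1 / E_Yn

erlang3 = lambda size: rng.gamma(shape=3, scale=1.0 / 3.0, size=size)   # mean 1, CV = 1/sqrt(3)
uniform = lambda size: rng.uniform(0.0, 2.0, size=size)                 # mean 1, CV = 1/sqrt(3)

for n in [10, 50, 100]:
    print(f"n={n:3d}  Erlang(k=3): {speedup(erlang3, n):5.2f}   Uniform(0,2): {speedup(uniform, n):5.2f}")
```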
We show the speedups with varying the number of cores 1 to 100 in Figure 5. The speedups for uniform distribution are better than the ones for Erlang distribution. The reason for these results is as followings: the shape of graph corresponding to PDF of Erlang distribution is mountain type while the shape corresponding to PDF of uniform distribution is horizontally flat. In other words, the random number which follows Erlang distribution tends to be around the peak of the graph while the probability of generating a random number for uniform distribution is equal to any other numbers. Therefore, the probability for generating a smaller number for uniform distribution is higher than the one for Erlang distribution. In our model, because the smallest random variable which shows the shortest execution time among cores is adopted as the overall execution time, the speedups for uniform distribution are better than the ones for Erlang distribution. These results show that the identical CV does not always yield the identical speedup. While CV is a convenient measure to predict a speedup, it is necessary to consider the distribution as well as CV for a more precise prediction. Conclusion In this paper, we constructed a mathematical model which represents the behavior of competitive parallel computing and theoretically analyzed competitive parallel computing using the model. We investigate sufficient conditions which provide a linear speedup through a theoretical analysis as well as simulations with various kinds of probability distribution. As a consequence, we found that exponential distribution yields a linear speedup and is not the only distribution which yields a linear speedup. This imply that it is possible to find the different distribution which yields a linear distribution and is easier to be realized as a real-world entity than exponential distribution. Although CV is consider as a convenient measure to predict a speedup so far, we found that the identical CV does not always yield the identical speedup through the experiments with the fixed CV. Our future work will include: • to find a better distribution which yields a linear or superlinear speedup than exponential distribution. In other words, such a distribution should be easier to realized as a real-world entity than exponential distribution. • to evaluate our proposed model using applications. More specifically, we compare the predicted speedup which is obtained through a probabilistic analysis of an application with the corresponding measured speedup experimentally.
2,985
1908.07715
2969758725
In competitive parallel computing, identical copies of the code in a phase of a sequential program are assigned to processor cores and the result of the fastest core is adopted. In the literature, it is reported that a superlinear speedup can be achieved if there is enough fluctuation among the execution times consumed by the cores. Competitive parallel computing is therefore a promising approach to using a huge number of cores effectively. However, there are at present few theoretical studies on the speedups that competitive parallel computing can achieve. In this paper, we present a behavioral model of competitive parallel computing and provide a means to predict the speedup it yields through theoretical analyses and simulations. We also found a sufficient condition under which competitive parallel computing provides a linear speedup: more specifically, it is sufficient for the execution times consumed by the cores to follow an exponential distribution. In addition, we found that different distributions with the identical coefficient of variation (CV) do not always provide the identical speedup. While CV is a convenient measure for predicting a speedup, it is not enough to provide an exact prediction.
Without enough attention to the degree of variance of the execution times among processors, using naively wastes computing resources. To overcome this problem, Cledat @cite_12 @cite_9 @cite_2 proposes the methods called and . The CV of WalkSAT, one of the applications they adopted for evaluation, is less than one, and the speedup is worse than a linear speedup. Meanwhile, the CV of another application, MSL motion planning, is greater than one, and a superlinear speedup is achieved. These results are consistent with ours. Therefore, it is proper to claim that our results reinforce and extend their work.
{ "abstract": [ "With the advent of multi-cores and many-cores, traditional techniques that seek only to improve FLOPS of performance or the degree of parallelism have hit a roadblock with regards to providing even greater performance. In order to surmount this roadblock, techniques should more directly address the underlying design objectives of an application. Specific implementations and algorithmic choices in applications are intended to achieve the underlying realism objectives in the programmer's mind. We identify two specific aspects of this realism that traditional programming and parallelization approaches do not capture and exploit to utilize the growing number of cores. The first aspect is that the goal of minimizing program execution time can be satisfactorily met if the program execution time is low with sufficiently high probability. We exploit the fact that randomized algorithms are available for many commonly used kernels, and that the use of parallelism can achieve very low expected execution times with high probability for these algorithms. This can provide speedups to parts of the application that were hitherto deemed sequential and ignored for extracting performance via multi-cores. The second aspect of realism that we exploit is that important classes of emerging applications, like gaming and interactive visualization, have user-interactivity and responsiveness requirements that are as important as raw performance. Their design goal is to maximize the functionality expressed, while maintaining a high and smooth frame-rate. Therefore, the primary objective for these applications is not to run a fixed computation as fast as possible, but rather to scale the application semantics up or down depending on the resources available. Our framework intends to capture the responsiveness requirements of these applications as they pertain to expressed realism and automatically scale the application semantics expressed on every architecture, including very resource-rich many-cores.", "With core counts on the rise, the sequential components of applications are becoming the major bottleneck in performance scaling as predicted by Amdahl's law. We are therefore faced with the simultaneous problems of occupying an increasing number of cores and speeding up sequential sections. In this work, we reconcile these two seemingly incompatible problems with a novel programming model called N-way. The core idea behind N-way is to benefit from the algorithmic diversity available to express certain key computational steps. By simultaneously launching in parallel multiple ways to solve a given computation, a runtime can just-in-time pick the best (for example the fastest) way and therefore achieve speedup. Previous work has demonstrated the benefits of such an approach but has not addressed its inherent waste. In this work, we focus on providing a mathematically sound learning-based statistical model that can be used by a runtime to determine the optimal balance between resources used and benefits obtainable through N-way. We further describe a dynamic culling mechanism to further reduce resource waste. We present abstractions and a runtime support to cleanly encapsulate the computational-options and monitor their progress. We demonstrate a low-overhead runtime that achieves significant speedup over a range of widely used kernels. Our results demonstrate super-linear speedups in certain cases.", "" ], "cite_N": [ "@cite_9", "@cite_12", "@cite_2" ], "mid": [ "154671064", "1995195328", "" ] }
A sufficient condition for a linear speedup in competitive parallel computing
Multi-core and many-core are in the mainstream of parallel computing and there is a steady increase in the number of their cores. However, in the near future, it is expected that the degree of parallelism is below the number of cores which the hardware provides due to the restriction of the problems to solve or the algorithms to execute [1]. Meanwhile, it is getting more and more difficult to write a parallel program because 1) it is necessary to control a huge amount of flows of program execution and 2) the elements of parallel computing system get diversified over the last decade. In the future, writing a parallel program gets complicated extremely as the number of cores grows [2,3]. To alleviate these above problems, competitive parallel computing or its equivalent are proposed [4,5,6]. In competitive parallel computing, the identical copies of a code in a phase of a sequential program are assigned to processor cores and the result of the fastest core is adopted. It is reported that a superlinear speedup can be achieved if there is an enough fluctuation among the execution times consumed by the cores. Competitive parallel computing has some advantages over conventional cooperative parallel computing; 1) in competitive parallel computing, it is not necessary to parallelize an existing program, and 2) competitive parallel computing is applicable to algorithms which are impossible or difficult to parallelize. Competitive parallel computing is a promising approach to use a huge amount of cores effectively. However, there is few theoretical studies on speedups which can be achieved by competitive parallel computing at present. Specifically, although it is intuitively understood that a larger fluctuation among the execution times, namely, a larger coefficient of variation (CV) provides a larger speedup, to the best of our knowledge, the relation between a CV and a speedup is not evaluated quantitatively in detail. The contributions of this paper are following: • We present the behavioral model of competitive parallel computing and provide a means to predict a speedup which competitive parallel computing yields through theoretical analyses and simulations. • We found a sufficient condition to provide a linear speedup which competitive parallel computing yields. More specifically, it is sufficient for the execution times which consumed by the cores to follow exponential distribution. In addition, we proved that exponential distribution is not the only distribution to achieve a linear speedup. In other words, exponential distribution is not a necessary condition to achieve a linear speedup. • We found that the different distributions which have the identical CV do not always provide the identical speedup. While CV is a convenient measure to predict a speedup, it is not enough to provide an exact prediction. The rest of this paper is organized as follows: Section 2 provides related work. In Section 3, we propose a mathematical model to evaluate competitive parallel computing and present a means to calculate the execution time of competitive parallel computing. In Section 4, we evaluate speedups which competitive parallel computing provides through Monte Carlo simulation based on our proposed model and present the relation between a CV and a speedup quantitatively. Finally, we conclude our study and describe our future work in Section 5. 
A Mathematical Analysis of Competitive Parallel Computing In this section, we show a behavioral model of competitive parallel computing and describe how to calculate the execution time based on the model. A model of the program execution in competitive parallel computing In general, a sequential program consists of several phases. In this paper, we define the term 'a phase' as a program region which is between two semantic points. For example, a phase is a sentence of a program, a function call, or an iteration of a loop. Typically, the result which is produced in a phase is consumed in the consecutive or later phases. While sequential computing executes a phase on a single core, competitive parallel computing assigns the identical copies of the phase to multiple cores and makes the cores compete. The result of the fastest core is adopted and the other cores are terminated. Then, the flow of control goes to the next phase. In this paper, to model competitive parallel computing which behaves as mentioned above, we consider the minimum model which represents a program consisting of a single phase and running on n cores as shown in Figure 1. The execution times of a phase running on the different cores might be different each other if the cores are assigned to the different algorithms or the identical algorithm with the different parameters. The external factors including cache misses and network delay also produce the fluctuation of the execution times. These cause randomness. In order to model the execution of such a program, we denote the execution time of Core i by X i , i = 1, 2, . . . , n, where {X i } n i=1 are independent and identically distributed random variables (i.i.d. r.v.'s). At this time, the overall execution time of the program is Y n = min(X 1 , X 2 , . . . , X n ) because the execution time of the program is the execution time of the fastest core. In the next section, we obtain the probability distribution which the random variable Y n follows. Calculations of the execution time As mentioned above, we assume that n random variables X 1 , X 2 , . . . , X n is i.i.d. r.v.'s. We denote the cumulative distribution function (CDF) which these n random variables follow by F X (x) = P (X 1 ≤ x) = P (X 2 ≤ x) = · · · = P (X n ≤ x) and also denote its probability density function (PDF) (if exists) by f X (x). Proposition 3.1. The random variable Y n is the minimum among n random variables X 1 , X 2 , . . . , X n , that is, Y n = min(X 1 , X 2 , . . . , X n ). The CDF which Y n follows is as follows: F Yn (y) = 1 − (1 − F X (y)) n . Proof. F Yn (y) = P (Y n ≤ y) = P (min(X 1 , X 2 , . . . , X n ) ≤ y) = 1 − P (min(X 1 , X 2 , . . . , X n ) > y) = 1 − P (X 1 > y, X 2 > y, . . . , X n > y) = 1 − P (X 1 > y)P (X 2 > y) · · · P (X n > y) (∵ independent random variables) = 1 − (1 − P (X 1 ≤ y))(1 − P (X 2 ≤ y)) · · · (1 − P (X n ≤ y)) = 1 − (1 − F X (y))(1 − F X (y)) · · · (1 − F X (y)) = 1 − (1 − F X (y)) n(1) Evaluation We calculated speedups which competitive parallel computing provides through Monte Carlo simulation based on our proposed model. We gave four probability distributions to the model. In this section, we describe the four probability distribution [10,11]. Then, we demonstrate the exact solution in case that random variables which represent the execution time follows exponential distribution. Finally, we show the results of the simulation and discuss the relation between a CV and a speedup. 
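A compact, vectorized rendering of Algorithm 1 for the exponential case, estimating E(Y_n) by Monte Carlo and reporting S_n = E(Y_1)/E(Y_n); the sample size and core counts are illustrative, and any of the distributions discussed below can be substituted for the sampler:

```python
# Compact version of Algorithm 1: estimate E(Y_n) by Monte Carlo and report the speedup
# S_n = E(Y_1) / E(Y_n). The exponential sampler (lam = 1) is shown; N and the core counts
# are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def mean_execution_time(sampler, n, N=100_000):
    """E(Y_n) where Y_n is the minimum of n i.i.d. execution times drawn from `sampler`."""
    return sampler(size=(N, n)).min(axis=1).mean()

exponential = lambda size: rng.exponential(scale=1.0, size=size)    # lam = 1

E_Y1 = mean_execution_time(exponential, n=1)
for n in [2, 10, 50, 100]:
    E_Yn = mean_execution_time(exponential, n)
    print(f"n={n:3d}  speedup S_n ~= {E_Y1 / E_Yn:6.2f}  (a linear speedup predicts {n})")
```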
Probability distribution of the execution time The overall execution time may vary significantly depending on the distribution of random variables which represent the execution time for each processor core. In this study, we assume that a random variable follows one of four distributions: exponential distribution, Erlang distribution, hyperexponential distribution, and uniform distribution. In exponential distribution whose parameter is λ, the CDF for X i is assumed in the form F X (x) = P (X i ≤ x) = 1 − e −λx(2) for i = 1, 2, . . . , n, so that the PDF is in the form f X (x) = λe −λx(3) and expected value (mean execution time) becomes E(X i ) = 1 λ . The CDF and the PDF of Erlang distribution are F X (x) = 1 − e −λkx k−1 r=0 (λkx) r r! , f X (x) = (λk) k (k − 1)! x k−1 e −λkx , respectively, where k is the number of phases 1 . The expected value of random variables which follow the above Erlang distribution is also 1 λ . The CDF and the PDF of hyperexponential distribution are F X (x) = 1 − k j=1 C j e −λ j x , f X (x) = k j=1 C j λ j e −λ j x , respectively, where {C j } k j=1 is an arbitrary discrete distribution. We chose parameters of hyperexponential distribution so that all of their expected value is equal to 1 λ as in the above two distributions. As a result, we obtained the PDF of hyperexponential distribution as follows: f X (x) = a 2 λe −aλx + a 4a − 2 λe − a 2a−1 λx ,(4) where a( = 1 2 ) is a real number. The adoption of these distributions for the execution time is based on the following idea. For non-negative random variables with the same expected value, CV is the most useful and popular characteristic parameter for comparing the degree of variation. The CV c(X) for non-negative random variable X is defined by c(X) = V (X) E(X) where V (X) is variance of X, i.e., V (X) = E(X 2 ) − E(X) 2 . It is clear that for fixed value of E(X), as increases the value of c(X), the variance of X also increases. In the field of probability theory, exponential distribution, Erlang distribution, and hyperexponential distribution are the most typical distributions with different CV. It is well known that c(X) = 1 for exponential distribution, c(X) < 1 for Erlang distribution, and c(X) > 1 for hyperexponential distribution. In other words, for the same expected value, Erlang distribution shows lower variance and hyperexponential distribution shows higher variance comparing with exponential distribution. In this paper, we additionally adopt uniform distribution. The CDF and the PDF of uniform distribution are F X (x) = P (X i ≤ x) = x − a b − a , f X (x) = 1 b − a , respectively, where 0 ≤ a < X i ≤ b. The CV of uniform distribution is less than one, that is, c(X i ) = b − a √ 3(b + a) < 1. The exact solution with exponential distribution We show the exact solution of the expected execution time in case that the n random variables follow exponential distribution. Hereafter, we assume that λ = 1 without loss of generality. the execution time for n = 1 In general, the expected value of a random variable X is calculated as follows: E(X) = ∞ 0 xf (x)dx,(5) where f (x) is PDF which the random variable follows. For n = 1, Y 1 = min(X 1 ) = X 1 . Therefore, using Equation 3 and Equation 5, E(Y 1 ) = ∞ 0 xλe −λx dx = 1 λ = 1. the execution time for n > 1 Hereafter, we define a speedup as S n = E(Y 1 )/E(Y n ). Therefore, E(Y n ) = ∞ 0 yf Y (y)dy = 1 λn = 1 n . Consequently, S n = n, that is, a linear speedup. 
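A short sketch that estimates the CV of the four distributions above, with parameters chosen so that every mean equals 1/λ = 1; the specific values of k, a, and the uniform bounds are illustrative:

```python
# Estimate the coefficient of variation c(X) = sqrt(V(X)) / E(X) for the four distributions,
# with parameters chosen so that every mean is 1/lam = 1. The values of k (Erlang),
# a (hyperexponential), and the uniform bounds are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, lam, k, a = 1_000_000, 1.0, 3, 2.0

def cv(x):
    return x.std() / x.mean()

exponential = rng.exponential(1.0 / lam, N)
erlang = rng.gamma(shape=k, scale=1.0 / (lam * k), size=N)          # Erlang(k) with mean 1/lam
# Hyperexponential H_2(a) from Equation 4: an equal mixture of Exp(a*lam) and Exp(a*lam/(2a-1)).
pick = rng.random(N) < 0.5
hyper = np.where(pick, rng.exponential(1.0 / (a * lam), N),
                       rng.exponential((2 * a - 1) / (a * lam), N))
uniform = rng.uniform(0.0, 2.0 / lam, N)                            # mean 1/lam, CV = 1/sqrt(3)

for name, x in [("M", exponential), (f"E_{k}", erlang), (f"H_2({a})", hyper), ("U(0,2)", uniform)]:
    print(f"{name:8s} mean={x.mean():.3f}  CV={cv(x):.3f}")
```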
(a) is not a necessary condition for (b): It is sufficient to show another distribution which provides (b). If the random variable Y n follows the distribution which is represented as Equation 4 with nλ instead of λ, namely, f Yn (x) = a 2 nλe −anλx + a 4a − 2 nλe − a 2a−1 nλx , then E(Y n ) = 1 λn = 1 n . This is another example which provides (b) and is different to Equation 6 which is obtained from exponential distribution. Therefore, it is proved that (a) is not a necessary condition for (b). The results of numerical experiments To evaluate the relation between a CV and a speedup, we calculated the execution times with varying the distributions which the random variables follow. We carried out Monte Carlo simulations as shown in Algorithm 1. Input: the number of steps N , the distribution D, the number of processor cores n. Output: the execution time. 1. i ← 0. 2. S ← 0. 3. Substitute the random numbers which follows the distribution D into the n random variables X 1 , X 2 , . . . , X n [12]. 4. Y n ← min(X 1 , X 2 , . . . , X n ). 5. S ← S + Y n . 6. i ← i + 1. if i < N go to Step 3, otherwise go to Step 8. Output S N . We varied the number of cores n = {1, 2, . . . , 100} and defined N as 100,000. We denote exponential distribution, Erlang distribution with parameter k, and hyperexponential distribution with parameter a by M, E k , H 2(a) , respectively, derived from Kendall's notation in queuing theory [13]. Comparing Speedups among various CVs The speedups with varying CV are shown in Figure 2. CVs are shown in Table 1. As a whole, more cores and a larger CV bring a larger speedup. From these results, we theoretically confirmed the fact which is intuitively predicted and is confirmed experimentally by other studies. With the distribution M, a Speedups for one hundred processors with extreme CVs In Section 4.3.1, we found that the different CV provides the different speedup. To explore the relation between a CV and a speedup in more detail, we carried out simulations with varying CV more finely for n = 100. We show the speedups with varying the parameter k of Erlang distribution 2 to 100 in Figure 3. Speedups are 7.68 and 15.14 for 0.58 and 0.71 as CV, respectively. These speedups are possibly acceptable as performance gains for n = 100. Meanwhile, the speedups are lower than 2 with lower CVs. These speedups are unacceptable unless computing resources and electricity are abundantly available. We show the speedups with varying the parameter a of hyperexponential Figure 4. Note that hyperexponential distribution is equivalent to exponential distribution if a = 1. While the speedup is 426.08 for 1.59 as CV, the speedup grows rapidly, that is, as 1,798.56 and 4,975.12 for 1.70 and 1.72 as CV, respectively. If someone finds an application which shows such behavior, a huge performance gain is obtained. Although it might not be realistic to find such an application, it is meaningful to obtain a theoretical perspective for a performance gain. Comparing Speedups when fixing CV Finally, we compare the speedups for Erlang distribution with the ones for uniform distribution, to explore the speedups with the identical CV provided by different distributions. We adopted the parameter k as 3 for Erlang distribution and chose the parameters as a = 0, b = 2 for uniform distribution so that the expected value is 1, which is the same as other distributions. As a result, CV is 1 √ 3 ≈ 0.58 for both Erlang distribution and uniform distribution. 
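A sketch of the extreme-CV experiment described above, reading Equation 4 as an equal mixture of Exp(aλ) and Exp(aλ/(2a−1)) with λ = 1: as a grows, the CV approaches √3 and the estimated speedup for n = 100 cores grows sharply. The values of a and the sample size are illustrative:

```python
# Extreme-CV sketch for the hyperexponential H_2(a) of Equation 4 (lam = 1): the CV grows
# toward sqrt(3) as a increases, and the estimated speedup for n = 100 cores grows sharply.
import numpy as np

rng = np.random.default_rng(3)

def sample_h2(a, size):
    pick = rng.random(size) < 0.5
    fast = rng.exponential(1.0 / a, size)            # rate a * lam
    slow = rng.exponential((2 * a - 1) / a, size)    # rate a * lam / (2a - 1)
    return np.where(pick, fast, slow)

n, N = 100, 50_000
for a in [1.0, 2.0, 10.0, 100.0]:
    x = sample_h2(a, (N, n))
    cv = x.std() / x.mean()                          # CV of the per-core execution time
    speedup = 1.0 / x.min(axis=1).mean()             # E(Y_1) = 1/lam = 1 for every a
    print(f"a={a:6.1f}  CV~={cv:4.2f}  estimated S_100~={speedup:8.1f}")
```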
Figure 5 shows the speedups as the number of cores is varied from 1 to 100. The speedups for the uniform distribution are better than those for the Erlang distribution. The reason is as follows: the graph of the PDF of the Erlang distribution is mountain-shaped, while the graph of the PDF of the uniform distribution is flat. In other words, a random number which follows the Erlang distribution tends to fall near the peak of the graph, whereas under the uniform distribution every value in the interval is equally likely. Therefore, the probability of generating a small number is higher for the uniform distribution than for the Erlang distribution. In our model, because the smallest random variable, which represents the shortest execution time among the cores, is adopted as the overall execution time, the speedups for the uniform distribution are better than those for the Erlang distribution. These results show that an identical CV does not always yield an identical speedup. While the CV is a convenient measure for predicting a speedup, it is necessary to consider the distribution itself, in addition to the CV, for a more precise prediction.

Conclusion

In this paper, we constructed a mathematical model which represents the behavior of competitive parallel computing and theoretically analyzed competitive parallel computing using this model. We investigated sufficient conditions which provide a linear speedup through a theoretical analysis as well as simulations with various kinds of probability distributions. As a consequence, we found that the exponential distribution yields a linear speedup but is not the only distribution which does so. This implies that it may be possible to find a different distribution which yields a linear speedup and is easier to realize as a real-world entity than the exponential distribution. Although the CV has so far been considered a convenient measure for predicting a speedup, our experiments with a fixed CV showed that an identical CV does not always yield an identical speedup. Our future work will include:
• finding a distribution which yields a linear or superlinear speedup and is better than the exponential distribution in the sense that it is easier to realize as a real-world entity;
• evaluating our proposed model using applications; more specifically, comparing the speedup predicted through a probabilistic analysis of an application with the corresponding experimentally measured speedup.
2,985
1908.04431
2967036800
The interconnectivity of cyber and physical systems and Internet of things has created ubiquitous concerns of cyber threats for enterprise system managers. It is common that the asset owners and enterprise network operators need to work with cybersecurity professionals to manage the risk by remunerating them for their efforts that are not directly observable. In this paper, we use a principal-agent framework to capture the service relationships between the two parties, i.e., the asset owner (principal) and the cyber risk manager (agent). Specifically, we consider a dynamic systemic risk management problem with asymmetric information where the principal can only observe cyber risk outcomes of the enterprise network rather than directly the efforts that the manager expends on protecting the resources. Under this information pattern, the principal aims to minimize the systemic cyber risks by designing a dynamic contract that specifies the compensation flows and the anticipated efforts of the manager by taking into account his incentives and rational behaviors. We formulate a bi-level mechanism design problem for dynamic contract design within the framework of a class of stochastic differential games. We show that the principal has rational controllability of the systemic risk by designing an incentive compatible estimator of the agent's hidden efforts. We characterize the optimal solution by reformulating the problem as a stochastic optimal control program which can be solved using dynamic programming. We further investigate a benchmark scenario with complete information and identify conditions that yield zero information rent and lead to a new certainty equivalence principle for principal-agent problems. Finally, case studies over networked systems are carried out to illustrate the theoretical results obtained.
Cybersecurity has become a critical issue due to the large-scale deployment of smart devices and their integration with information and communication technologies (ICTs) @cite_6 @cite_54 . Hence, security risk management is an important task which has been investigated in different research fields, such as communications and infrastructures @cite_29 @cite_43 , cloud computing @cite_33 , and IoT @cite_16 . The interconnections between nodes and devices make risk management a challenging problem, as cyber risk can propagate and escalate into systemic risk @cite_35 , and hence interdependent security risk analysis is necessary @cite_37 . Managing systemic risk is nontrivial, as demonstrated in financial systems @cite_39 , critical infrastructures @cite_44 , and communication networks @cite_42 . In a network with a small number of agents, graph-theoretic methods have been widely adopted to model the strategic interactions and risk interdependencies between agents @cite_39 @cite_24 . When the number of nodes becomes large, @cite_7 has proposed a mean-field game approach in which a representative agent captures the system dynamics. Different from @cite_51 @cite_17 , which minimize the static systemic risk at equilibrium, we focus in this paper on a mechanism design problem that reduces systemic risk by exploiting the system dynamics.
{ "abstract": [ "", "With the remarkable growth of the Internet and communication technologies over the past few decades, Internet of Things (IoTs) is enabling the ubiquitous connectivity of heterogeneous physical devices with software, sensors, and actuators. IoT networks are naturally two-layer with the cloud and cellular networks coexisting with the underlaid device-todevice (D2D) communications. The connectivity of IoTs plays an important role in information dissemination for missioncritical and civilian applications. However, IoT communication networks are vulnerable to cyber attacks including the denialof- service (DoS) and jamming attacks, resulting in link removals in IoT network. In this work, we develop a heterogeneous IoT network design framework in which a network designer can add links to provide additional communication paths between two nodes or secure links against attacks by investing resources. By anticipating the strategic cyber attacks, we characterize the optimal design of secure IoT network by first providing a lower bound on the number of links a secure network requires for a given budget of protected links, and then developing a method to construct networks that satisfy the heterogeneous network design specifications. Therefore, each layer of the designed heterogeneous IoT network is resistant to a predefined level of malicious attacks with minimum resources. Finally, we provide case studies on the Internet of Battlefield Things (IoBT) to corroborate and illustrate our obtained results", "Cloud computing is an evolving paradigm with tremendous momentum, but its unique aspects exacerbate security and privacy challenges. This article explores the roadblocks and solutions to providing a trustworthy cloud computing environment.", "We propose a simple model of inter-bank borrowing and lending where the evolution of the log-monetary reserves of @math banks is described by a system of diffusion processes coupled through their drifts in such a way that stability of the system depends on the rate of inter-bank borrowing and lending. Systemic risk is characterized by a large number of banks reaching a default threshold by a given time horizon. Our model incorporates a game feature where each bank controls its rate of borrowing lending to a central bank. The optimization reflects the desire of each bank to borrow from the central bank when its monetary reserve falls below a critical level or lend if it rises above this critical level which is chosen here as the average monetary reserve. Borrowing from or lending to the central bank is also subject to a quadratic cost at a rate which can be fixed by the regulator. We solve explicitly for Nash equilibria with finitely many players, and we show that in this model the central bank acts as a clearing house, adding liquidity to the system without affecting its systemic risk. We also study the corresponding Mean Field Game in the limit of large number of banks in the presence of a common noise.", "The cloud-enabled Internet of controlled things (IoCT) envisions a network of sensors, controllers, and actuators connected through a local cloud in order to intelligently control physical devices. Because cloud services are vulnerable to advanced persistent threats (APTs), each device in the IoCT must strategically decide whether to trust cloud services that may be compromised. In this paper, we present iSTRICT, an interdependent strategic trust mechanism for the cloud-enabled IoCT. iSTRICT is composed of three interdependent layers. 
In the cloud layer, iSTRICT uses FlipIt games to conceptualize APTs. In the communication layer, it captures the interaction between devices and the cloud using signaling games. In the physical layer, iSTRICT uses optimal control to quantify the utilities in the higher level games. Best response dynamics link the three layers in an overall “game-of-games,” for which the outcome is captured by a concept called Gestalt Nash equilibrium (GNE). We prove the existence of a GNE under a set of natural assumptions and develop an adaptive algorithm to iteratively compute the equilibrium. Finally, we apply iSTRICT to trust management for autonomous vehicles that rely on measurements from remote sources. We show that strategic trust in the communication layer achieves a worst-case probability of compromise for any attack and defense costs in the cyber layer.", "In this paper, we introduce a distributed dynamic routing algorithm in multi-hop cognitive radio (CR) networks, in which secondary users (SUs) want to minimize their interference to the primary users (PUs) while keeping the delay along the route low. We employ a cognitive pilot channel (CPC) for SUs to be able to access the information about PUs, including PUs' locations and channel conditions. Medial axis with a relaxation factor is used as a reference path for the routing, along which we develop a hierarchical structure for multiple sources to reach their destinations. We introduce a temporal and spatial dynamic non-cooperative game to model the interactions among the SUs as well as their influences on the PUs, and obtain by backward induction a set of mixed (behavioral) Nash equilibrium strategies. We also employ a multi-stage fictitious play learning algorithm for distributed routing, which minimizes the overall interference from the SUs to the PUs, as well as the average packet delay along the route from the SU nodes to their destinations. Simulation results show that our proposed algorithm can avoid congestion in the CR network and minimize delay while keeping the interference level low.", "This paper reviews the state of the art in cyber security risk assessment of Supervisory Control and Data Acquisition (SCADA) systems. We select and in-detail examine twenty-four risk assessment methods developed for or applied in the context of a SCADA system. We describe the essence of the methods and then analyse them in terms of aim; application domain; the stages of risk management addressed; key risk management concepts covered; impact measurement; sources of probabilistic data; evaluation and tool support. Based on the analysis, we suggest an intuitive scheme for the categorisation of cyber security risk assessment methods for SCADA systems. We also outline five research challenges facing the domain and point out the approaches that might be taken.", "Internet of Things (IoT) is characterized by heterogeneous technologies, which concur to the provisioning of innovative services in various application domains. In this scenario, the satisfaction of security and privacy requirements plays a fundamental role. Such requirements include data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Traditional security countermeasures cannot be directly applied to IoT technologies due to the different standards and communication stacks involved. 
Moreover, the high number of interconnected devices arises scalability issues; therefore a flexible infrastructure is needed able to deal with security threats in such a dynamic environment. In this survey we present the main research challenges and the existing solutions in the field of IoT security, identifying open issues, and suggesting some hints for future research.", "We provide a survey of 31 quantitative measures of systemic risk in the economics and finance literature, chosen to span key themes and issues in systemic risk measurement and management. We motivate these measures from the supervisory, research, and data perspectives in the main text and present concise definitions of each risk measure—including required inputs, expected outputs, and data requirements—in an extensive Supplemental Appendix. To encourage experimentation and innovation among as broad an audience as possible, we have developed an open-source Matlab® library for most of the analytics surveyed, which, once tested, will be accessible through the Office of Financial Research (OFR) at http: www.treasury.gov initiatives wsr ofr Pages default.aspx.", "Our modern era is characterized by a large-scale web of interconnected and interdependent economic and infrastructure systems, coupled with threats of terrorism. This paper demonstrates the value of introducing interdependency analysis into various phases of risk assessment and management through application of the Inoperability Input–Output Model (IIM). The IIM estimates the cascading inoperability and economic losses that result from interdependencies within large-scale economic and infrastructure systems. Based on real data and the Nobel Prize-winning W. Leontief economic model, the IIM is a computationally efficient, inexpensive, holistic method for estimating economic impacts. Three illustrative case studies are presented. The first and second illustrate how the supply- and demand-side IIM is used to calculate higher-order effects from attacks to vulnerabilities and implementation of risk management policies in large-scale economic systems. The final case study illustrates a more general use for interdependency analysis: to evaluate risk management options against multiple objectives. This study calculates a Pareto-optimal or efficient frontier of solutions by integrating a simplified model of the costs of recovery to the Power sector derived from open-source data with the IIM. Through these case studies, which use a database from the Bureau of Economic Analysis, we illustrate the value of interdependency analysis in the risk assessment and management process as an integral part of systems engineering. © 2005 Wiley Periodicals, Inc. Syst Eng 8: 323–341, 2005", "", "We study cascades of failures in a network of interdependent financial organizations: how discontinuous changes in asset values (e.g., defaults and shutdowns) trigger further failures, and how this depends on network structure. Integration (greater dependence on counterparties) and diversification (more counterparties per organization) have different, nonmonotonic effects on the extent of cascades. Diversification connects the network initially, permitting cascades to travel; but as it increases further, organizations are better insured against one another's failures. Integration also faces trade-offs: increased dependence on other organizations versus less sensitivity to own investments. 
Finally, we illustrate the model with data on European debt cross-holdings.", "With the increasing connectivity enabled by the Internet of Things (IoT), security becomes a critical concern, and users should invest to secure their IoT applications. Due to the massive devices in the IoT network, users cannot be aware of the security policies taken by all its connected neighbors. Instead, a user makes security decisions based on the cyber risks that he perceives by observing a selected number of nodes. To this end, we propose a model which incorporates the limited attention or bounded rationality nature of players in the IoT. Specifically, each individual builds a sparse cognitive network of nodes to respond to. Based on this simplified cognitive network representation, each user then determines his security management policy by minimizing his own real-world security cost. The bounded rational decision-makings of players and their cognitive network formations are interdependent and thus should be addressed in a holistic manner. We establish a games-in-games framework and propose a Gestalt Nash equilibrium (GNE) solution concept to characterize the decisions of agents and quantify their risk of bounded perception due to the limited attention. In addition, we design a proximal-based iterative algorithm to compute the GNE. With case studies of smart communities, the designed algorithm can successfully identify the critical users whose decisions need to be taken into account by the other users during the security management.", "We consider default by firms that are part of a single clearing mechanism. The obligations of all firms within the system are determined simultaneously in a fashion consistent with the priority of debt claims and the limited liability of equity. We first show, via a fixed-point argument, that there always exists a \"clearing payment vector\" that clears the obligations of the members of the clearing system; under mild regularity conditions, this clearing vector is unique. Next, we develop an algorithm that both clears the financial system in a computationally efficient fashion and provides information on the systemic risk faced by the individual system firms. Finally, we produce qualitative comparative statics for financial systems. These comparative statics imply that, in contrast to single-firm results, even unsystematic, nondissipative shocks to the system will lower the total value of the system and may lower the value of the equity of some of the individual system firms.", "This paper argues that the extent of financial contagion exhibits a form of phase transition: as long as the magnitude of negative shocks affecting financial institutions are sufficiently small, a more densely connected financial network (corresponding to a more diversified pattern of interbank liabilities) enhances financial stability. However, beyond a certain point, dense interconnections serve as a mechanism for the propagation of shocks, leading to a more fragile financial system. Our results thus highlight that the same factors that contribute to resilience under certain conditions may function as significant sources of systemic risk under others. 
(JEL D85, E44, G21, G28, L14)" ], "cite_N": [ "@cite_35", "@cite_37", "@cite_33", "@cite_7", "@cite_54", "@cite_29", "@cite_42", "@cite_6", "@cite_39", "@cite_44", "@cite_43", "@cite_24", "@cite_16", "@cite_51", "@cite_17" ], "mid": [ "", "2736486587", "2134894205", "2013403206", "2963077384", "2031857436", "2131060714", "2104927807", "2114493251", "2148238802", "2949848161", "2132442820", "2945487542", "2162337502", "2045874447" ] }
Dynamic Contract Design for Systemic Cyber Risk Management of Interdependent Enterprise Networks
Cybersecurity is a critical issue in modern enterprise networks due to the adoption of advanced technologies, e.g., Internet of things (IoT), cloud and data centers, and supervisory control and data acquisition (SCADA) system, which create abundant surfaces for cyber attacks [19,37,46]. Due to the interconnections between nodes in the network, the cyber risk can propagate and escalate into systemic risks, which have been a major contributor to massive spreading of Mirai botnets, phishing messages, and ransomware, causing information breaches and financial losses. In addition, systemic risks are highly dynamic by nature as the network faces a continuous flow of cybersecurity incidents. Hence, it becomes critical for the network and asset owner to protect resources from cyber attacks. The complex interdependencies between nodes and fast evolution nature of threats have made it challenging to mitigate systemic risks of enterprise network and thus requires expert knowledge from cyber domains. The asset owners or system operators need to delegate tasks of risk management including security hardening and risk mitigation to security professionals. As depicted in Fig. 1, the owner can be viewed as a principal who employs a security professional to fulfill tasks that include monitoring the network, patching the software and devices, and recovering machines from failures. The security professionals can be viewed as an agent whose efforts are remunerated by the principal. This principal-agent type of interaction models the service relationships between the two parties. The effort of the agent can be measured by the hours he spends on the security tasks. Moreover, the amount of allocated effort has a direct impact on the systemic cyber risk. For example, with more frequent scans on suspicious files and the Internet traffic at each node, the cyber risk becomes low and less likely to spread. An agent plays an important role in systemic risk as he can determine the amount of his effort and the way of distributing efforts on protecting nodes over the network. Hence, it is essential for the principal to incentivize the agent to distribute his resources desirably to protect the network. In the cyber risk management of enterprise network, one distinction is the lack of knowledge of the principal about the effort spent by the agent. The principal is only able to observe risk outcomes, e.g., the denial or failures of services and conspicuous performance degradation. Moreover, due to the randomness in the cyber network, e.g., the biased assessment of risks and the unknown attack behaviors, the cyber risk evolves under uncertainties, making it difficult for the principal to infer the exact effort of the agent from the observations. This type of incomplete information structure is called moral hazard in contracts, under which the asset owner aims to minimize the systemic cyber risk by providing sufficient incentives to the risk manager through a dynamic contract that specifies the compensation flows and suggested effort, while the risk manager's objective is to maximize his payoff with minimum effort by responding to the agreed contract. Fig. 1 Systemic cyber risk management for enterprise network. The asset owner (principal) delegates the risk management tasks, e.g., network monitoring and software patching, to security professionals (agents) by designing a contract which specifies the remuneration schemes. The amount of remuneration is directly related to the systemic risk outcome of the network. 
The dynamic principal-agent problem has an asymmetric information structure in which the risk manager determines his effort over time, while this effort is hidden to or unobservable by the asset owner. This information structure makes the contract design a challenging decision making problem. Conventional methods to address problems of incomplete information include information state based separation principle [34,35] and belief update scheme [23]. However, these methods cannot be directly applied to design an optimal contract for the players. To address this challenge, we develop a systematic solution methodology which includes an estimation phase, a verification phase, and a control phase. Specifically, we first anticipate the risk manager's optimal effort based on the systemic risk outcome by designing an estimator for the principal. Then, we show that the principal has rational controllability of the systemic risk by verifying that the estimated effort is incentive compatible. Finally, we transform the problem using decision variables that adapt to the principal's information set and obtain the solution by solving a reformulated standard stochastic control program. The optimal dynamic mechanism design (ODMD) includes the compensation flows and the suggested effort. The designed optimal dynamic contract includes the compensations for direct cost of effort, discounted future revenue, cyber risk uncertainties, as well as incentive provisions. Furthermore, under the incentive compatible contract, the risk manager's behavior is strategically neutral in the sense that his current action depends solely on the present stage's cost. The policies of the optimal contract can be determined by solving a stochastic optimal control problem. Under mild conditions, the decision variables associated with the suggested effort and the compensation can be solved in parallel, leading to a separation principle for dynamic mechanism design. As a benchmark problem for comparison, we further investigate the dynamic contract under full information where the principal can fully observe the agent's effort. In general cases, we show that there is a positive information rent quantifying the difference of principal's objective value between the contracts designed under incomplete information and full information. In addition, we identify conditions under which the information rent is degenerated to zero, yielding a certainty equivalence principle in which the mechanism designs under full and asymmetric information become identical. For example, the hidden-action impact is absent in the linear quadratic (LQ) case where the principal achieves a perfect estimation and control of the risk manager's dynamic effort. The incentive provided by the principal to the agent is critical for mitigating the cyber risk. Without sufficient control effort, the risk would grow and propagate over the network. Under the optimal dynamic contract, both the systemic cyber risk and adopted effort decrease over time. Moreover, the effort converges to a positive constant and the systemic risk can remain at a low level. Furthermore, a higher network connectivity requires the agent to spend more effort to reduce the systemic cyber risk. In the linear quadratic (LQ) scenario, we observe that the nodes in the cyber network have self-accountability, i.e., the amount of effort allocated on each node depends only on its risk influences on other nodes and is independent of exogenous risks coming from neighboring nodes. 
This observation enables large-scale implementation of distributed risk mitigation policy by determining the outer degrees of the nodes. The contributions of this work are summarized as follows. 1) We formulate a dynamic mechanism design problem for systemic cyber risk management of enterprise networks under hidden-action type of incomplete information. 2) We provide a systematic methodology to characterize the optimal mechanism design by transforming the problem into a stochastic optimal control problem with compatible information structures. 3) We define the concept of "rational controllability" to capture the feature of indirect control of cyber risks by the principal, and identify the explicit conditions under which the designed dynamic contract is incentive compatible. 4) We identify a separation principle for dynamic contract design under mild conditions, where the estimation variable capturing the suggested risk management effort and the control variable specifying the compensation can be determined separately. 5) We reveal a certainty equivalence principle for a class of dynamic mechanism design problems where the information rent is zero, i.e., the contracts designed under asymmetric and full information cases coincide. 6) We observe that larger enterprise network connectivity and risk dependency strength require the principal to provide more incentives to the agent. Under the optimal contract in the LQ case, the allocated effort depends on the nodes' outer degree, leading to a self-accountable and distributed risk mitigation scheme. Organization of the Paper The paper is organized as follows. We formulate the systemic cyber risk management problem in Section 2. Section 3 analyzes the dynamic contract forms and the incentive constraints. Section 4 reformulates the principal's problem and solves a linear quadratic case explicitly. Section 5 presents a complete-information benchmark scenario for comparison. Section 6 presents examples to illustrate the dynamic contract design for systemic risk management. Section 7 concludes the paper. Problem Formulation This section formulates the dynamic systemic cyber risk management problem of enterprise networks under asymmetric information using a principal-agent framework, and presents an overview of the adopted methodology. Systemic Cyber Risk Management An enterprise network is comprised of a set N of nodes, where N = {1, 2, ..., N}. Due to the interdependencies among different nodes and fast changing nature of the threats, mitigating the systemic cyber risk is a challenging task which requires expertise from cybersecurity professionals. For example, to reduce the enterprise network vulnerability, it requires a constant monitoring of the Internet traffic into and out of the system, regular patching and updating of the device software, and continuous traffic scanning for intrusion detection. The principal 1 can delegate the risk management tasks over a time period [0, T ] to a professional manager. The cyber risk of each node depends on the level of compliance with security criteria, the number of vulnerabilities of the software and hardware assets, the system configurations, and the concerned threat models [43]. The risk also evolves over time Fig. 2 Systemic cyber risk management of an enterprise network containing two nodes. The cyber risk at node i is denoted by Y i t and the applied risk manager's effort is E i t , i ∈ {1, 2}. 
The cyber risk at each node depends on its system configuration, the attack model, and the risk manager's effort. Note that the cyber risk can propagate due to the connections between nodes. as the enterprise node constantly updates its software, introduces new functionalities, and interconnects with other nodes. We let Y i t ∈ R be the state of node i ∈ N to capture the risk of each node that maps the system configurations at time t and the threat models to the associated risk. For example, under the advanced persistent threat (APT) type of cyber attacks, one can assess the node's risk using FlipIt game model in which the defender strategically configures the system by reclaiming the control of the node with some frequencies [49]. The FlipIt game outcome yields node's risk which is the expected proportion of time that the node may be compromised by the adversary. As the nodes in the enterprise network are connected, their risks become interdependent. We use an N ×N-dimensional real matrix A with non-negative entries to model the influence of node i on node j, i, j ∈ N . The diagonal entries in A represent the strength of internal risk evolution, and the off-diagonal entries capture the risk influence magnitude between nodes [21,41]. For convenience, the risk profile of the network is denoted by Y t = [Y 1 t ,Y 2 t , · · · ,Y N t ]. The dynamics of the risk profile describes the evolution of the systemic risk of the whole network. To manage the risk profile, the risk manager can apply effort continuously over the time period [0, T ]. Specifically, at every time t, t ∈ [0, T ], the risk manager can spend effort E t ∈ E ⊆ R N + on the nodes that mitigates the systemic cyber risk, where E is a compact set. As fore-mentioned, the effort can be measured by the amount of time and effectiveness of the risk manager spent on monitoring the cyberspace of the enterprise network. The amount of reduced risk is monotonically increasing with the allocated effort E t [40]. This fact is reflected by many security practices, e.g., frequent scanning and analyzing the log files as well as timely patching the software can reduce the probability of successful cyber compromise by the adversary. Another critical factor to be considered is that the cyber risk faces uncertainties due to the randomness in the cyber network, e.g., the biased assessment and measurement of The principal designs contract: and suggested effort (Period 1). The agent decides whether to accept the contract or not (Period 2). Based on the cyber risk , the agent is dynamically remunerated with for his effort according to the agreed contract (Period 4). After finishing the task, the agent is further remunerated with (Period 5). Time line Contracting stage Execution stage If the agent accepts the contract, he spends effort in the cyber risk management (Period 3). Fig. 3 Timeline of the dynamic contract design for systemic cyber risk management. risk losses and under-modelling of random cyber threats [39]. Similar to [15], we use an N-dimensional standard Brownian motion B t which is defined on the complete probability space (Ω , F , P) to model the risk uncertainties on nodes. For clarity, Fig. 2 depicts an example of cyber risk management of the enterprise network containing two interdependent nodes. Each node stands for a subnetwork with its own system configuration, and the adversary can target different assets, e.g., application servers and workstations. 
The risk manager applies efforts E 1 t and E 2 t to node 1 and node 2 continuously to reduce the cyber risks Y 1 t and Y 2 t , respectively. The interdependency between two nodes is captured by the factor A 12 = A 21 . , }, [0, ], { t T c t T p  , [0, ], t t E T  t Y t p T c In sum, we focus on a model of systemic cyber risk evolution described by the following stochastic differential equation (SDE): dY t = AY t dt − E t dt + Σ t (Y t )dB t , Y 0 = y 0 ,(1) where y 0 ∈ R N + is a known positive vector denoting the initial systemic risk. Let D N×N + denote the space of diagonal real matrices with positive elements. Then, Σ t : R N → D N×N + captures the volatility of cyber risks in the network. Here, the diffusion coefficient Σ t (Y t ) indicates that the magnitude of uncertainty can be related to the dynamic risk of each node. We assume that the entries in Σ t (Y t ) are bounded, satisfying T 0 Σ t (Y t )1 N 2 dt ≤ C 1 almost surely, where C 1 is a positive constant, · denotes the standard Euclidean norm, and 1 N is an N-dimensional vector with all ones. Furthermore, the risk manager's effort E t satisfies the condition T 0 |E t |dt ≤ C 2 almost surely, where C 2 is a positive constant. Since the manager can apply effort to every node through E t , the systemic risk level Y t is fully manageable in the sense that more effort on each node reduces its cyber risk more significantly. Note that the model in (1) captures the characteristics of systemic cyber risks of enterprise network, and it is also adopted in various others' risk management scenarios inluding cyber-physical industrial control systems [53] and financial networks [29]. As shown in Fig. 3, the dynamic contract design for cyber risk management can be broken into two stages, namely the contracting stage and the execution stage. In the contracting stage, the principal first provides a dynamic contract that specifies the payment rules for the risk management to the agent and suggested/anticipated effort. Then, the agent chooses to accept the contract or not based on the provided benefits. If the agent accepts, then at the execution stage he needs to determine the adopted effort E t to reduce the systemic cyber risk. During the task, the principal observes the dynamic risk outcome Y t and pays p t ∈ P ⊆ R + compensation to the agent according to the agreed contract, where P is a compact set. After completing the task, the agent also receives a terminal payment c T ∈ R + which finalizes the contract. Therefore, the principal needs to decide on the payment process {p t } 0≤t≤T as well as the final compensation c T by observing the systemic risks. Note that the effort level E t , t ∈ [0, T ], is hidden information of the agent, which corresponds to the hidden-action scenario, or moral hazard, in contract theory. This feature a reflection of the fact that the principal (asset owner) of the enterprise network cares about the cyber risk outcome Y t rather than the implicit effort E t adopted by the risk manager. Furthermore, we denote the principal's information set by Y t , representing the augmented filtration generated by {Y s } 0≤s≤t . The agent's information set is denoted by A t , including {Y s } 0≤s≤t and {B s } 0≤s≤t . Note that for the agent, knowing {Y s } 0≤s≤t or {B s } 0≤s≤t is equivalent as he can determine one based on the other using also his effort process {E s } 0≤s≤t . Specifically, at time t, the principal's knowledge includes only the path of Y s , 0 ≤ s ≤ t. 
In comparison, the agent can observe every term in the system, including the principal's information as well as the path of B s , 0 ≤ s ≤ t. The principal observes risk outcome Y t , and his goal is to reduce the systemic risk by providing incentives to the manager. Therefore, the principal has no direct control of the systemic risk, and the difficulty he faces is in designing an efficient remuneration scheme based only on the limited observable information. Next, we rewrite the Y T -measurable terminal payment as c T = T 0 dc t + c 0 , to facilitate the contract analysis, where c t has an interpretation of cumulative payment during [0,t], and c 0 is a constant to be determined. Note that c 0 is a virtual initial payment and the agent receives it not at initial time 0, but rather at the terminal time T which is captured by the term c T . The evolution of the aggregated equivalent Y tmeasurable financial income process M t of the cyber risk manager can be described by dM t = dc t + p t dt.(2) The cyber risk manager's cost function is: J A ({E t } 0≤t≤T ; {p t } 0≤t≤T , c T ) = E T 0 e −rt f A (t, p t , E t )dt + e −rT h A (M T ),(3) where E is the expectation operator, r ∈ R + is a discount factor, f A : [0, T ] × R + × E → R is the running cost, and h A : R + → R − is the terminal cost. The function f A is (implicitly) composed of two terms: the cost of spending effort E t in risk management, and the received compensation p t from the principal. Note that the final compensation c T is incorporated into h A (M T ). Assumptions we make on the two additive terms of the cost functions are as follows. Assumption 1 The running cost function f A (t, p t , E t ) is uniformly continuous and differentiable in p t and E t . Further, it is monotonically decreasing in p t , and monotonically increasing and strictly convex in E t . The terminal cost function h A (M T ) is a continuously differentiable, convex, and monotonic decreasing function. The principal's cost function, on the other hand, is specified as: J P ({p t } 0≤t≤T , c T ) = E T 0 e −rt f P (t,Y t , p t )dt + e −rT (c T + h P (Y T )) ,(4) where f P : [0, T ] × R N × P → R is the running cost, and h P : R N → R denotes the terminal cost. The function f P captures the instantaneous cost of dynamic systemic risk and the payment to the agent. Assumption 2 The running cost for the principal, f P (t,Y t , p t ), is uniformly continuous and differentiable in Y t and p t . Further, it is monotonically increasing in p t and Y t . The terminal cost for the principal, h P (Y T ), is a continuously differentiable and monotonic increasing function. Dynamic Principal-Agent Model In cyber risk management, the principal contracts with the agent over [0, T ]. For a given contract, the risk manager is strategic in minimizing the net cost. This rational behavior can be captured by the following definition. Definition 1 (Incentive Compatibility) Under a given payment process {p t } 0≤t≤T and terminal compensation c T of the principal, the effort trajectory {E * t } 0≤t≤T of the agent is incentive compatible (IC) if it optimizes the cost function (3), i.e., J A ({E * t } 0≤t≤T ; {p t } 0≤t≤T , c T ) ≤ J A ({E t } 0≤t≤T ; {p t } 0≤t≤T , c T ) , ∀E t ∈ E , t ∈ [0, T ].(5) The asset owner needs to provide sufficient incentives for the agent to fulfill the task of risk management, and this fact is captured through individual rationality as follows. 
Definition 2 (Individual Rationality) The agent's policy is individually rational (IR) if the effort trajectory {E * t } 0≤t≤T leads to satisfaction of J A ({E * t } 0≤t≤T ; {p t } 0≤t≤T , c T ) = inf E t ∈E J A ({E t } 0≤t≤T ; {p t } 0≤t≤T , c T ) ≤ J A ,(6) where J A is a predetermined non-positive constant. Note that the non-positiveness of J A ensures the profitability of risk manager by fulfilling the risk management tasks. We next provide precise formulations of the problems faced by the agent and the principal. Under a contract {{p t } 0≤t≤T , c T }, the agent minimizes his total cost by solving the following problem: (O − A) : min E t ∈E , t∈[0,T ] J A ({E t } 0≤t≤T ; {p t } 0≤t≤T , c T ) subject to the stochastic dynamics (1), and the payment process (2). By taking into account the IC and IR constraints, the principal addresses the following optimization problem: (O − P) : min p t ∈P, t∈[0,T ], c T J P ({p t } 0≤t≤T , c T ) subject to the stochastic dynamics (1), IC (5), and IR (6). Note that the designed contract terms {p t } 0≤t≤T and c T should adapt to the information available to the principal in view of the underlying incomplete information. Denote the solution to (O − P) by {p * t } 0≤t≤T and c * T . We present the solution concept of the formulated problem as follows. Definition 3 (Optimal Dynamic Mechanism Design (ODMD)) The ODMD consists of the contract {{p * t } 0≤t≤T , c * T } as well as the effort process {E * t } 0≤t≤T that solve the problems (O − P) and (O − A), respectively. In addition, the compensation processes p * t and c * T are adapted to Y t and Y T , respectively, and the risk manager's effort E * t is adapted to A t . Remark: ODMD captures the bi-level interdependent decision making of the principal and the agent, which is a Stackelberg differential game with a nonstandard information structure. Since the principal (leader) delegates the control task to the agent (follower) but cannot observe his adopted action, ODMD features the limited nature of the principal's information. Due to the hidden effort of the risk manager, (O − P) is not a classical stochastic optimal control problem. Specifically, the principal only observes the cyber risk outcome rather than the effort which has to be incentivized. To address this challenge brought about by the presence of asymmetric information, we adopt a systematic approach to design an incentive compatible and optimal mechanism. Overview of the Methodology We present an overview of the steps involved in our derivation, with details worked out in the following sections. The principal first estimates the risk manager's effort based on the systemic risk output (estimation phase), and then verifies that the estimated effort is incentive compatible (verification phase), and finally designs an optimal compensation scheme under the incentive compatible estimator (control phase). To address the challenge, our goal is to transform the problem using variables that adapt to the principal's information set. To this end, the principal first assumes that the agent behaves optimally with effort level E * Analysis of Risk Manager's Incentives We first provide a form of the terminal payment contract term and then focus on deriving an incentive compatible estimator of the cyber risk manager's effort. Terminal Payment Analysis We first present the following result on the IR constraint. 
Lemma 1 The IR constraint holds as an equality, i.e., J A ({E * t } 0≤t≤T ; {p t } 0≤t≤T , c T ) = J A .(7) Proof If J A ({E * t } 0≤t≤T ; {p t } 0≤t≤T , c T ) < J A , the designed contract is not optimal as the principal can further reduce his cost by paying less to the agent. Next, we first express the agent's cost under the principal's information set Y t as well as using the property that the agent chooses an optimal E * t , and then use the principal's estimation about the agent's cost to characterize the cumulative payment process. We introduce a new variable W t representing the expected future cost of the agent anticipated by the principal as follows: W t = E T t e −r(s−t) f A s, p s , E * s )ds + e −r(T −t) h A (M T ) Y t .(8) Note that W t is evaluated under the information available to the principal at time t. Thus, the total expected cost of the agent under the information Y t can be expressed as U t = E T 0 e −rt f A t, p t , E t dt + e −rT h A (M T ) Y t , E t = E * t = t 0 e −rs f A s, p s , E * s ds + e −rt W t .(9) We further have conditions U 0 = W 0 = J A and W T = h A (M T ). The effort E t = E * t indicates that the agent behaves optimally under a given contract. Proposition 1 The total expected cost of the agent, U t , is a martingale under Y t . In addition, there exists an N-dimensional progressively measureable process ζ t such that dU t = e −rt ζ T t (dY t − AY t dt + E * t dt) ,(10) where T denotes the transpose operator. Proof First, we have E[U t |Y τ ] =E τ 0 e −rs f A (s, p s , E * s )ds + e −rτ W τ Y τ + E t τ e −rs f A (s, p s , E * s )ds + e −rt W t − e −rτ W τ Y τ =U τ + E t τ e −rs f A (s, p s , E * s )ds + e −rt W t Y τ − e −rτ W τ .(11) Dynamic Contract Design for Systemic Cyber Risk Management Then, using (8), we obtain E t τ e −rs f A (s, p s , E * s )ds + e −rt W t Y τ = E T τ e −rs f A (s, p s , E * s )ds + e −rT h A (M T ) Y τ = e −rτ W τ .(12) Hence, E[U t |Y τ ] = U τ , and U t is a Y t -measurable martingale. Using martingale representation theorem [36] yields (10). Based on Proposition 1, we can subsequently obtain the following lemma which facilitates design of the terminal payment term design in the optimal contract. Lemma 2 The aggregate equivalent income process M t evolves according to: dM t = rh A (M t ) h A (M t ) dt − f A (t, p t , E * t ) h A (M t ) dt + 1 h A (M t ) ζ T t (dY t − AY t dt + E * t dt) − 1 2 h A (M t ) h A (M t ) ζ T t Σ t (Y t )Σ t (Y t ) T ζ t h 2 A (M t ) dt.(13) Proof By substituting (10) into (9), we obtain dU t = e −rt f A t, p t , E * t dt − re −rt W t dt + e −rt dW t , ⇒ dW t = rW t dt − f A t, p t , E * t dt + ζ T t (dY t − AY t dt + E * t dt) .(14) Since W T = h A (M T ), we adopt the form W t = h A (M t ) and aim to characterize the contract that yields this form. Then, we have (14) indicates that J A = h A (M 0 ) = h A (c 0 ). Further,h A (M t )dM t + 1 2 h A (M t )χ 2 t dt =rh A (M t )dt − f A t, p t , E * t dt + ζ T t (dY t − AY t dt + E * t dt) ,(15) where χ t is the volatility of process M t . Matching the volatility terms in (15) gives h 2 A (M t )χ 2 t = ζ T t Σ t (Y t )Σ t (Y t ) T ζ t . Then, (15) yields the result. Remark: Note that (10) includes information on the cyber risk dynamics (1). Thus, (13) can be seen as a modified stochastic dynamic system of the agent with M t as a new state variable. In addition, ζ t can be interpreted as the principal's control over the agent's revenue. Another point to be highlighted is the role of p t in (13). 
Here, p t is not optimal yet and its value needs to be further determined by the principal. Currently, we can view p t as an exogenous variable that enters the constructed dynamic contract form (13). In addition, the feedback structure of the dynamic contract on Y t is reflected by the cumulative payment term c t shown later in Lemma 3. Interpretation of Dynamic Contract: The dynamic contract determines the risk manager's revenue in (13), which includes four separate terms. The first term, rh A (M t ) h A (M t ) dt, indicates that the risk manager's payoff should be increased to compensate the discounted future revenue. The second term, − f A (t,p t ,E * t ) h A (M t ) dt, is an offset of the direct cost of agent's effort. The third part, 1 h A (M t ) ζ T t (dY t − AY t dt + E * t dt) , is an incentive term, which captures the agent's benefit from spending effort in risk management. Here, the agent's real effort enters into the Y t term. The last one, − 1 2 h A (M t ) h A (M t ) ζ T t Σ t (Y t )Σ t (Y t ) T ζ t h 2 A (M t ) dt, is a risk compensation term (the manager is risk-averse), capturing the fact that the risk manager faces uncertainties in the performance outcome due to the Brownian motion. For completeness, we present the cumulative payment process c t in the following lemma. Lemma 3 The cumulative payment process c t evolves according to: dc t = rh A (M t ) h A (M t ) dt − f A (t, p t , E * t ) h A (M t ) dt + 1 h A (M t ) ζ T t (dY t − AY t dt + E * t dt) − 1 2 h A (M t ) h A (M t ) ζ T t Σ t (Y t )Σ t (Y t ) T ζ t h 2 A (M t ) dt − p t dt.(16) Proof The result can be directly obtained from (2) and Lemma 2. Lemma 3 characterizes the cumulative payment process c t with initial value c 0 given by h A (c 0 ) = J A . We focus on the class of contracts in (16), and aim to determine the optimal variables (ζ t and p t ) to minimize the principal's cost. Note that (16) is adapted to the principal's information set Y t , since the principal observes M t and Y t , determines p t , ζ t , and anticipates E * t . In addition, this payment process is directly related to the actual effort that the agent adopts, captured by dY t . The variable ζ t can be further interpreted as the sensitivity (or gain) of contract payment to the risk difference under the agent's optimal and actual efforts. In addition, since W t = h A (M t ), based on (8), we obtain U t = E T 0 e −rt f A t, p t , E t dt + e −rT h A (M T ) A t = t 0 e −rs f A s, p s , E * s ds + e −rt h A (M t ),(17) where the conditional expectation on A t admits the same value as that on Y t . Proposition 1 indicates that U t is a martingale. Then, the expected value of e −rt h A (M t ) in (17) is zero which confirms the zero expected future cost of the agent. Incentive Analysis of Cyber Risk Manager Recall that the principal suggests an optimal effort process E * t by assuming that the agent behaves optimally. However, the agent can determine his actual effort E t that minimizes the cost J A based on A t which might not be the same as E * t that the principal suggests. Thus, the next important problem for the principal is to determine an incentive compatible contract. To achieve this goal, the principal determines the process ζ t and the payment p t strategically to control the agent's actual effort E t . Denote by V a (t, M t ) the agent's value function with terminal condition V a (T, M T ) = h A (M T ). 
The property of value function ensures that the risk management effort is optimal if it satisfies the following dynamic programming equation: e −rt V a (t, M t ) = min E t E { s t e −ru f A (u, p u , E u )du + e −rs V a (s, M s )}. Then, using (1), (2), and (16), the cyber risk manager's revenue can be expressed as: dM t = rh A (M t ) h A (M t ) dt − f A t, p t , E * t h A (M t ) dt + 1 h A (M t ) ζ T t (E * t − E t ) dt − 1 2 h A (M t ) h A (M t ) ζ T t Σ t (Y t )Σ t (Y t ) T ζ t h 2 A (M t ) dt + 1 h A (M t ) ζ T t Σ t (Y t )dB t .(18) We rewrite the risk manager's problem as follows: (O − A ) : min E t ∈E , t∈[0,T ] J A ({E t } 0≤t≤T ; {p t } 0≤t≤T , c T ) subject to the stochastic dynamics (18), and the payment process (2). The Hamilton-Jacobi-Bellman (HJB) equation associated with the stochastic optimal control problem (O − A ) is min E t 1 2 ∂ 2 V a ∂ M 2 t 1 h 2 A (M t ) ζ T t Σ t (Y t )Σ t (Y t ) T ζ t + ∂V a ∂ M t rh A (M t ) h A (M t ) − f A t, p t , E * t h A (M t ) + 1 h A (M t ) ζ T t (E * t − E t ) − 1 2 h A (M t ) h A (M t ) ζ T t Σ t (Y t )Σ t (Y t ) T ζ t h 2 A (M t ) + f A (t, p t , E t ) + ∂V a ∂t = rV a , V a (T, M T ) = h A (M T ).(19) Based on the candidate value function V a (t, M t ) = h A (M t ), the second-order condition of (19) is satisfied. Then, the optimal solution to (O − A ) is E o t = arg max E t ∂V a ∂ M t 1 h A (M t ) ζ T t E t − f A (t, p t , E t ) = arg max E t ζ T t E t − f A (t, p t , E t ).(20) For a given contract, E o t is the optimal effort of the agent. Then, when the anticipated effort E * t of the principal coincides with E o t , i.e., E * t = E o t , the provided contract is IC and E * t is implemented. The following theorem captures this result. Theorem 1 When the compensation process in the contract is specified by (16), then the IC constraint is satisfied, i.e., E * t is implemented as expected by the principal, if and only if the following condition holds: E * t = arg max E t ζ T t E t − f A (t, p t , E t ),(21) where ζ t is adapted to the information Y t available to the principal. Proof We verify that E * t is implemented by the agent. For an arbitrary process {E t } 0≤t≤T , we define a variablẽ U t = t 0 e −rs f A s, p s , E s ds + e −rt h A (M t ), where M t is given by (18). Note that the HJB equation associated with (O − A ) can also be written as 0 = min E t E dŨ t |A t . Then, we know that when E t = E * t , the drift term ofŨ t is positive and yieldsŨ t < E[Ũ T |A t ]. Hence, at time t, the expected total cost of the risk manager is greater thanŨ t . When E t = E * t , we have E dŨ t |A t = 0, and thusŨ t = E[Ũ T |A t ]. This verifies that E * t is the incentive compatible optimal decision of the risk manager such that his total expected cost is achieved at the lower bound. Based on Theorem 1, the principal can indirectly manipulate the implemented effort of the agent by determining the variables ζ t and p t jointly. Hence, under (21), the suggested effort E * t is incentive compatible. Remark: From (21), we can see that the risk manager's behavior is strategically neutral. Specifically, at time t, the risk manager decides on the optimal effort E * t based only on the current cost (term f A (t, p t , E t )) and benefit (term ζ T t E t ) instead of future-looking variables. This neutral behavior is consistent with the fact that a larger current effort does not induce a higher payoff for the agent after time t, since as shown in (17), the expected future cost over time (t, T ] is zero due to the martingale property. 
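To illustrate the incentive-compatibility condition (21) numerically, the short Python sketch below (our addition, with assumed values for the sensitivity ζ_t and the cost curvature R, and with the effort set treated as unconstrained for simplicity) solves the agent's pointwise best-response problem with a generic optimizer and checks it against the closed form R^{-1}ζ_t that holds for the quadratic effort cost introduced later in the LQ setting.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative values (assumed): contract sensitivity and effort-cost curvature.
zeta = np.array([0.8, 0.3, 0.5])
R = np.diag([2.0, 1.0, 4.0])          # positive definite => strictly convex effort cost

def effort_cost(E):
    """Quadratic effort cost f_{A,E}(E) = 0.5 * E^T R E, as in the LQ setting."""
    return 0.5 * E @ R @ E

# Incentive-compatible effort: E* = argmax_E  zeta^T E - f_{A,E}(E),
# computed here by minimizing the negated objective.
res = minimize(lambda E: effort_cost(E) - zeta @ E, x0=np.zeros(3))
closed_form = np.linalg.solve(R, zeta)

print("numerical   E*:", np.round(res.x, 4))
print("closed-form E*:", np.round(closed_form, 4))   # R^{-1} zeta
```

The agreement between the two computations illustrates that, under (21), the process ζ_t pins down the effort the principal can expect the agent to implement for a given effort-cost model.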
The Principal's Problem: Optimal Dynamic Systemic Cyber Risk Management Our next goal is to characterize the dynamic contracts designed by the principal. Furthermore, we present a separation principle and explicit solutions to an LQ case in this section. Rational Controllability The controllability of the cyber risk is critical to the principal. To account for the incentives in the management of risk, we have the following definition. Definition 4 (Rational Controllability) The dynamic systemic cyber risk is rationally controllable if the principal can provide incentives {p t } 0≤t≤T and c T such that the risk manager's effort {E t } 0≤t≤T coincides with the one suggested by the principal. In ODMD, the rational controllability indicates that under {{p * t } 0≤t≤T , c * T }, the best-response behavior {E * t } 0≤t≤T of the agent is the same as the principal's predicted effort. The unique feature of rational controllability is that the principal cannot control the cyber risk directly but can rely on other terms to infer the rational behavior of the agent, which further influences the applied effort in risk management. Corollary 1 later captures this result. Stochastic Optimal Control Reformulation Knowing that the cyber risk manager behaves strategically, the principal aims to implement E * t and thus (16) becomes dc t = rh A (M t ) h A (M t ) dt − f A t, p t , E * t h A (M t ) dt − 1 2 h A (M t ) h A (M t ) ζ T t Σ t (Y t )Σ t (Y t ) T ζ t h 2 A (M t ) dt − p t dt + 1 h A (M t ) ζ T t Σ t (Y t )dB t .(22) Instead of dealing with the complex revenue dynamics (18) of the principal, we deal with its equivalent counterpart dh t shown in Theorem 2 below, which is much simpler. We reformulate the principal's problem as a standard stochastic optimal control problem as follows. Theorem 2 The principal's problem is reformulated as a stochastic optimal control problem as follows: (O − P ) : min p t ∈P, ζ t E T 0 e −rt f P (t,Y t , p t ) − e −r(T −t) p t dt + e −rT h P (Y T ) + h −1 A (h T ) such that dY t = AY t dt − E * t dt + Σ t (Y t )dB t , Y 0 = y 0 , dh t = rh t dt − f A (t, p t , E * t )dt + ζ T t Σ t (Y t )dB t , h 0 = J A , E * t = arg max E t ζ T t E t − f A (t, p t , E t ). Proof Recall that the expected cost of the cyber risk manager is equal to W t = h A (M t ). Then, under the optimal risk management effort and denoting h t = h A (M t ), we obtain dh t = rh t dt − f A (t, p t , E * t )dt + ζ T t Σ t (Y t )dB t , h 0 = J A . In addition, based on dc t = dM t − p t dt, we have c T = M T − T 0 p t dt. Since M T = h −1 A (h T ), we have e −rT c T = e −rT h −1 A (h T ) − e −rt T 0 e −r(T −t) p t dt. Thus, the cost function of the principal can be rewritten as E T 0 e −rt f P (t,Y t , p t ) − e −r(T −t) p t dt + e −rT h P Y T ) + h −1 A (h T ) , which yields the result. In the investigated incomplete information situations, the principal preserves the indirect controllability of systemic risk Y t by estimating the agent's effort E * t as well as specifying the contract terms p t , c T and process ζ t . Corollary 1 By providing incentives {{p t } 0≤t≤T , c T } and specifying process {ζ t } 0≤t≤T , the dynamic systemic cyber risk is rationally controllable, and the incentive compatible effort follows (21). The optimal {p * t } 0≤t≤T and {ζ * t } 0≤t≤T can be obtained from Theorem 2. Proof The result directly follows from Theorems 1 and 2. Remark: Theorem 2 presents solution to a standard optimal control problem for the principal, whose the existence and uniqueness have been well studied [51]. 
With f P , h P , f A , and h A satisfying the conditions in Assumptions 1 and 2, and the corresponding coefficients in the functions well selected ensuring the feasibility of (O − P ), the control problem can be solved efficiently by numerical methods [38]. Therefore, the ODMD for the systemic risk management problem, i.e., E * t , p * t , and c * T , can be determined from (21), (22) and Theorem 2, respectively. Separation Principle We next present a separation principle for the asset owner in determining the compensation p t and the auxiliary parameter ζ t . First, we make assumptions on the separability of the cost functions. (S1): The agent's running cost can generally be separated into two parts, including the effort and payment. Accordingly, we take f A (t, p t , E t ) to be in the form f A (t, p t , E t ) = f A,E (E t ) − f A,p (p t ),(23) where f A,E : E → R + is monotonically increasing, continuously differentiable and strictly convex, i.e., f A,E (E t ) > 0 and f A,E (E t ) > 0, and f A,p : P → R + . Then, the constraint E * t = arg max E t ζ T t E t − f A (t, p t , E t ) can be simplified to E * t = f −1 A,E (ζ t ).(24) (S2): We also assume that the principal's running cost takes the form f P (t,Y t , p t ) = f P,Y (Y t ) + f P,p (p t ),(25) where f P,Y : R N → R and f P,p : P → R + are monotonically increasing and continuously differentiable. The inverse function h −1 A plays a role in the principal's objective. We further have the following assumption. (L1): The agent's terminal cost function h A is linear, i.e., h A (M T ) = γM T , where γ < 0. Then, we have the following separation principle. Theorem 3 Under conditions (S1), (S2), and (L1), the principal's problem (O − P ) can be separated into two subproblems with respect to the decision variables ζ t and p t as: (SP1) : min ζ t E T 0 e −rt f P,Y (Y t ) − 1 γ f A,E f −1 A,E (ζ t ) dt + e −rT h P (Y T ) + 1 γ T 0 e −rt ζ T t Σ t (Y t )dB t such that dY t = AY t dt − f −1 A,E (ζ t )dt + Σ t (Y t )dB t , Y 0 = y 0 . (SP2) : min p t ∈P T 0 e −rt f P,p (p t ) − e −r(T −t) p t + 1 γ f A,p (p t ) dt. Proof For the constraint dh t = rh t dt − f A,E ( f −1 A,E (ζ t ))dt + f A,p (p t )dt + ζ T t Σ t (Y t )dB t , we obtain h t = e rt h 0 − t 0 e r(t−s) [ f A,E f −1 A,E (ζ s ) − f A,p (p s )]ds+ t 0 e r(t−s) ζ T s Σ s (Y s )dB s . Thus, the principal's problem can be rewritten as min p t ∈P,ζ t E T 0 e −rt f P,Y (Y t ) + f P,p (p t ) − e −r(T −t) p t dt + e −rT h P (Y T ) + h −1 A e rT J A − T 0 e r(T −s) f A,E f −1 A,E (ζ s ) ds + T 0 e r(T −s) f A,p (p s )ds + T 0 e r(T −s) ζ T s Σ s (Y s )dB s such that dY t = AY t dt − f −1 A,E (ζ t )dt + Σ t (Y t )dB t , Y 0 = y 0 . Then, the decomposition of the problem follows naturally. Remark: ζ t can be regarded as an estimation variable since it determines the anticipated effort E * t . The payment p t is a control variable that manipulates the risk manager's incentives and is determined at the control phase. Under appropriate conditions, these two estimation and control variables can be designed in a separate manner, yielding a separation principle in dynamic contract design for systemic risk management. To obtain more insights, we next focus on a class of models where the value function of the principal and the ODMD can be explicitly characterized. ODMD in LQ Setting In the LQ setting, the cost functions take forms as f A,E (E t ) = 1 2 E T t R t E t , and f A,p (p t ) = δ A p t , where R t is a positive-definite N × N-dimensional symmetric matrix and δ A is a positive constant. 
Then we obtain E * t = f −1 A,E (ζ t ) = R −1 t ζ t .(26) Further, we consider h P (Y T ) = ρ T Y T , where ρ ∈ R N + maps the cyber risks to monetary loss, and f P (t,Y t , p t ) = ρ T Y t + δ P p t , where δ P is a positive constant. In addition, h A (M T ) = −M T and Σ t (Y t ) = D t · diag(Y t ), where D t ∈ R N×N and 'diag' is a diagonal operator. The principal's problem becomes: min p t ∈P,ζ t E T 0 e −rt (ρ T Y t + δ P p t − e −r(T −t) p t )dt + e −rT (ρ T Y T − h T ) such that dY t = (AY t − R −1 t ζ t )dt + D t · diag(Y t )dB t , Y 0 = y 0 , dh t = rh t − 1 2 ζ T t R −1 t ζ t + δ A p t dt + ζ T t Σ t (Y t )dB t , h 0 = J A . The principal aims to maximize h T , which is equivalent to minimizing the agent's total revenue based on the relationship h T = −M T . The principal also considers the agent's participation constraint by setting h 0 = W 0 = J A , ensuring that the cyber risk manager has sufficient incentive to fulfill the task. Since e −rT h T = h 0 − T 0 e −rs 1 2 ζ T s R −1 s ζ t − δ A p s ds + T 0 e −rs ζ T s D s ·diag(Y s )dB s , the principal's problem can be rewritten as: min p t ∈P,ζ t E T 0 e −rt ρ T Y t + (δ P − δ A )p t − e −r(T −t) p t + 1 2 ζ T t R −1 t ζ t dt + e −rT ρ T Y T − J A such that dY t = (AY t − R −1 t ζ t )dt + D t · diag(Y t )dB t , Y 0 = y 0 . According to Theorem 3, the separation principle holds in the LQ case. To determine the optimal p t , we solve the following unconstrained optimization problem: min p t ∈P T 0 e −rt (δ P − δ A − e −r(T −t) )p t dt. Depending on the values of parameters δ P and δ A , we obtain the following results. If δ P − δ A ≥ 1, there is no intermediate payment, i.e., p t = 0, ∀t ∈ [0, T ]. In this regime, the principal has a higher valuation on the monetary payment than the agent does. In other words, the agent is relatively hard to be incentivized to do the risk management. When δ P − δ A ≤ 0, i.e., the principal focuses more on the cyber risk deduction rather than the expenditure on incentivizing the agent, the optimal p t is positively unbounded. However, in this regime, the terminal payment c T is negatively unbounded based on (22). This contract corresponds to the scenario where the risk manager receives a large amount of intermediate payment during the task while returning it to the principal after finishing the task which is not practical. Under 0 < δ P − δ A < 1, the intermediate compensation is either 0 or unbounded depending on the time index. Hence, to design a practical contract, we focus on the regime in which the intermediate payment is zero, and the risk manager receives a positive terminal payment c T . To obtain the optimal {ζ * t } 0≤t≤T , we assume that the process ζ t , t ∈ [0, T ], is non-anticipative, which can be verified later after obtaining the solution ζ * t . Then, the problem can be further simplified to: min ζ t E T 0 e −rt ρ T Y t + 1 2 ζ T t R −1 t ζ t dt + e −rT ρ T Y T − J A such that dY t = (AY t − R −1 t ζ t )dt + D t · diag(Y t )dB t , Y 0 = y 0 . The following theorem provides the optimal solution ζ * t . Theorem 4 In the LQ case, the optimal solution to the principal's problem is given by ζ * t = K t ,(27) where K t satisfies, and is the unique solution tȯ K t + (A − rI) T K t + ρ = 0, K T = ρ.(28) Furthermore, the minimum cost of the principal is given by J * p = K T 0 y 0 + m 0 − J A ,(29) where m 0 is obtained uniquely froṁ m t − rm t − 1 2 K T t R −1 t K t = 0, m T = 0.(30) with c 0 = −J A > 0, and K t is given by (28). 
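The terminal-value ODEs (28) and (30) in Theorem 4 are linear and can be integrated backward numerically. The sketch below does so with scipy for placeholder values of A, R_t and ρ, and then evaluates the principal's minimum cost (29); none of the numbers are taken from the paper's case studies.

```python
# Backward numerical solution of the coupled linear ODEs (28) and (30) from Theorem 4,
# starting from the terminal conditions K_T = rho and m_T = 0.  A, R and rho are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

N, T, r = 2, 1.0, 0.3
A = np.array([[2.0, 0.2],
              [0.0, 2.0]])
R_inv = np.linalg.inv(1.5 * np.eye(N))
rho = np.array([5.0, 5.0])
y0, J_A = np.array([5.0, 5.0]), -10.0

def backward_rhs(t, z):
    K, m = z[:N], z[N]
    dK = -((A - r * np.eye(N)).T @ K + rho)   # from (28):  K' = -(A - rI)^T K - rho
    dm = r * m + 0.5 * K @ R_inv @ K          # from (30):  m' = r m + 0.5 K^T R^-1 K
    return np.concatenate([dK, [dm]])

sol = solve_ivp(backward_rhs, [T, 0.0], np.concatenate([rho, [0.0]]))
K0, m0 = sol.y[:N, -1], sol.y[N, -1]
print("K_0 =", np.round(K0, 3), " m_0 =", round(float(m0), 3))
print("principal's minimum cost J_P* =", round(float(K0 @ y0 + m0 - J_A), 3))   # equation (29)
```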
The intermediate payment p t degenerates to zero, and the anticipated effort of the agent under the optimal contract is E * t = R −1 t K t . Proof The result follows from Theorems 1,4,and (22). Remark: As shown in Lemma 4, the cyber risk volatility Σ t (Y t ) does not impact the optimal dynamic contract design, since the principal's expected cost is linear in the systemic risk Y t . When one of the functions f p , h A and h p is not linear, the volatility Σ t (Y t ) will play a role in the contract design in solving the problem presented in Theorem 2. Even though the optimal dynamic contract does not depend on the cyber risk volatility in the LQ case, the risk volatility influences the real compensation during contract implementation. Corollary 2 The terminal compensation of risk manager has a larger variance when there are more complex interdependencies of risk uncertainties between nodes. Corollary 2 will further be illustrated through case studies in Section 6. Benchmark Scenario: Systemic Cyber Risk Management under Full Information In the full-information case, the principal observes the efforts that the cyber risk manager implements. We first solve the team problem in which the agent cooperates with the principal. To that end, the principal's cost under the team optimal solution is the best that he can achieve. Then, we aim to design a dynamic contract mechanism under which the agent will adopt the same policy as the team optimal one. In the cooperative case, the contract only needs to guarantee the participation constraint. Then, the principal's problem can be formulated as follows: (O − B) : min p t ∈P,c T ,E t ∈E E T 0 e −rt f P (t,Y t , p t )dt + e −rT (c T + h P (Y T )) such that dY t = AY t dt − E t dt + Σ t (Y t )dB t , Y 0 = y 0 , J A ({E * t } 0≤t≤T ; {p t } 0≤t≤T , c T ) = J A . As in the asymmetric information scenario, it is more convenient to deal with the dynamics of the cyber risk manager's expected cost. By designing the contract, the principal only needs to ensure the participation of the agent. Then, the principal's problem can be rewritten as follows: (O − B ) : min p t ∈P,ζ t ,E t ∈E E T 0 e −rt f P (t,Y t , p t ) − e −r(T −t) p t dt + e −rT h P Y T + h −1 A (h T ) such that dY t =AY t dt − E t dt + Σ t (Y t )dB t , Y 0 = y 0 , dh t =rh t dt − f A t, p t , E t dt + ζ T t Σ t (Y t )dB t , h 0 = J A . With the full observation of Y t and E t , ζ t can be chosen freely, and E t can be seen as a control variable of the principal. Note that the IC constraint (21) does not enter into (O − B ). In addition, the equivalent terminal payment process c t admits the same form as (22). (O − B ) is a standard stochastic optimal control problem which can be solved efficiently. To quantify the efficiency of dynamic contract designed in Section 4, we have the following definition. (O − B) by {{p b t } 0≤t≤T , c b T , {E b t } 0≤t≤T }. Then, the information rent is given by I R = J P ({p * t } 0≤t≤T , c * T ) − J P ({p b t } 0≤t≤T , c b T ).(37) Intuitively, information rent quantifies the difference between the principal's costs with optimal mechanisms designed under incomplete and full information. We have following result on information rent. Corollary 3 The optimal cost of the principal under full information is no larger than the one under asymmetric information. Hence, I R ≥ 0. Proof Comparing with the optimal {E * t } 0≤t≤T in (O − P ), the implemented effort {E b t } 0≤t≤T in (O − B ) does not depend on the variables ζ t and p t . 
Thus, (O − B ) admits a larger feasible solution space, which yields the result. LQ Setting: Certainty Equivalence Principle To further characterize the optimal contracts under full information and quantify the information rent, we investigate a class of special scenarios. Specifically, we take the functions to have the same forms as in Section 4.4. The principal's problem can then be written as min p t ∈P,E t ∈E E T 0 e −rt ρ T Y t + (δ P − δ A )p t − e −r(T −t) p t + 1 2 E T t R t E t dt + e −rT ρ T Y T − J A such that dY t = (AY t − E t )dt + D t · diag(Y t )dB t , Y 0 = y 0 . Note that ζ t does not appear in the optimization problem. However, ζ t enters the designed contract (22) through the term −ζ T t Σ t (Y t )dB t . In the long term contracting when T is relatively large, the expected value of −ζ T t Σ t (Y t )dB t is zero which is irrelevant with ζ t . Hence, the principal can set ζ t = 0 to reduce the contract complexity. Similar to the analysis in Section 4.4, we focus on the regime where the intermediate payment flow p t is zero, to avoid the unrealistic situation of negative terminal payment. We obtain the following lemma characterizing the certainty equivalence principle. Lemma 5 In the LQ settings, I R = 0 which reveals the certainty equivalence principle, i.e., the designed optimal contracts under the incomplete information are as efficient as those designed under complete information. The optimal solution of the agent is achieved at E o t = arg min E t − Γ T t E t + 1 2 E T t R t E t , which yields E o t = R −1 t Γ t . Based on Lemma 6, we choose Γ t = K t , and thus the agent implements the team optimal solution E b t . Further, (40) degenerates to the one in (38). Remark: In the LQ setting under full information and incomplete information, the optimal contract and the manager's behavior do not relate to the risk volatility Σ t (Y t ) of the network. The reason is that the cost function of the principal is linear in the systemic risk Y t . Hence, the expectation of the risk volatility term is zero, and Σ t (Y t ) does not play a role in the optimal dynamic contract. This fact in turn corroborates the zero information rent in the LQ setting due to the removal of risk uncertainty. A more general class of scenarios satisfying the certainty equivalence principle that leads to zero information rent is summarized as follows. Corollary 4 When f P (t, φ , p t ), h p (φ ) and h A (φ ) are linear in the argument φ , then I R = 0, where the optimal contracts under the full information and incomplete information coincide. Proof The linearity of functions removes the effects of risk uncertainties on the performance of the principal and the agent which leads to a zero information rent. Case Studies We demonstrate, in this section, the optimal design principles of dynamic contracts for systemic cyber risk management of enterprise networks through examples. Specifically, we first utilize a case study with one node to show that the dynamic contracts can successfully mitigate the systemic risk in a long period of time. Then, we investigate an enterprise network with a set of interconnected nodes to reveal the network effects in systemic risk management through dynamic contracts and discover a distributed way of mitigating the systemic risks. One-Node System Case First, we consider a one-dimensional case in which the enterprise network contains only one node, i.e., Y t is a scalar. Therefore, the risk manager protects the system by directing the security resources to this node. 
Note that for the LQ setting, the coupled ODEs in Theorem 4 admit the unique solutions: K t = ρ A − r (A − r + 1)e (A−r)(T −t) − 1 , m t = K 2 t 2rR t e −r(T −t) − 1 .(41) Therefore, based on Lemma 4, the optimal effort of the risk manager is E * t = R −1 t ζ * t = ρ R t (A − r) (A − r + 1)e (A−r)(T −t) − 1 ,(42) and the optimal compensation becomes dc t = rc t − K 2 t 2R t + AK t Y t dt − K t dY t , c 0 = −J A .(43) If the risk manager accepts this optimal contract, then the principal's excepted minimum cost is equal to J * P = K T 0 y 0 + m 0 − J A . To illustrate the optimal mechanism design, we choose specific values for the parameters in Section 4.4: ρ = 5 k$/unit, r = 0.3, R t = 1.5 k$/unit 2 , T = 1 year, y 0 = 5 unit, and J A = −10 k$. Figure 4 shows the results for varying values of the parameter A. Note that a single node system with a larger A indicates that it is more vulnerable and harder to mitigate the cyber risk. From Fig. 4, we find that with a larger A, the system requires more effort from the risk manager to bring the cyber risk down to a relatively low level. In all cases, the effort decreases as time increases, and finally converges to a positive constant ρ R t . This phenomenon indicates that when the system risk is high, the agent should spend more effort in risk management. When the risk is reduced to a relatively low level and the system becomes secure, then less effort is preferable as the risk will not grow. In addition, the corresponding terminal compensation c T increases with the amount of effort spent. Network Case We next investigate cyber risk management over enterprise networks and characterize the interdependencies between nodes. The unique solutions to the ODEs in Theorem 4 are then as follows: K t = ρ (A − rI) T −1 (A − rI) T + I e (A−rI) T (T −t) − I ,(44)m t = K T t R −1 t K t 2r e −r(T −t) − 1 ,(45) The optimal effort of the risk manager is Figure 5 shows the results, where we denote by E i * t and Y i t the effort and the corresponding risk of node i, i = 1, 2, respectively. Similar to the single-node case, both the effort and systemic risk decrease over time. Specifically, the dynamic effort converges to R −1 t ρ which can be verified directly by the analytical expression. Comparing E 1 * t with E 2 * t , we find that the risk manager should spend more effort on the nodes which can heavily influence other nodes. Even though there is no risk influence from node 1 to node 2, the optimal (b) Systemic cyber risk (c) Cumulative payment Fig. 4 (a), (b), and (c) show the effort, the cyber risk and the terminal payment under the optimal contract. The terminal compensation c T increases with the spent effort of the risk manager. effort E 2 * t increases as the influence strength becomes larger from node 2 to node 1. This phenomenon is consistent with the idea of controlling the origin to constrain the propagation of cyber risks. Furthermore, the value of E 2 * t indicates that a higher network connectivity requires more effort to mitigate the systemic cyber risk. E * t = R −1 t ρ (A − We next investigate a 4-node system where the network structures are shown in Fig. 6. The system parameters are the same as those in the 2-node case except for the matrix A. The diagonal entries in A are all equal to 2 and the off-diagonal entries that correspond to a link are all equal to 0.2. Figure 7 shows the results under the optimal mechanism. 
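Effort trajectories of the kind reported for the network cases can be reproduced directly from the closed form (44) by evaluating a matrix exponential. The sketch below does this for a hypothetical 2-node influence matrix (diagonal entries 2, one directed link of strength 0.2); the exact Fig. 6 topologies are not reproduced here, and the remaining parameters reuse the values stated above.

```python
# Evaluate the closed-form K_t in (44) and the implied effort E*_t = R^-1 K_t on a small
# time grid.  The 2-node influence matrix A is a hypothetical example, not the Fig. 6 cases.
import numpy as np
from scipy.linalg import expm

N, T, r = 2, 1.0, 0.3
A = np.array([[2.0, 0.2],        # node 2 influences node 1
              [0.0, 2.0]])
R_inv = np.linalg.inv(1.5 * np.eye(N))
rho = np.array([5.0, 5.0])
B = (A - r * np.eye(N)).T        # shorthand for (A - rI)^T

def K(t):
    # K_t = B^-1 [ (B + I) expm(B (T - t)) - I ] rho   (equation (44)); note K_T = rho
    return np.linalg.solve(B, ((B + np.eye(N)) @ expm(B * (T - t)) - np.eye(N)) @ rho)

for t in (0.0, 0.5, 1.0):
    E_star = R_inv @ K(t)        # effort converges to R^-1 rho as t -> T
    print(f"t = {t:.1f}  effort E*_t = {np.round(E_star, 3)}")
```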
The risk manager spends more effort on node 1 in cases 2 and 3 than in case 1, as the risk of node 1 can propagate to node 4 in the former two cases. Another key observation is that the amount of allocated effort on each node mainly depends on its risk influences on other nodes rather than on the exogenous risks (node's outer degree), yielding a self-accountable risk mitigation scheme. For example, even though node 4 impacts node 2 in case 3, the risk management efforts on node 2 are close in cases 2 and 3. A similar pattern can be seen on node 4 in cases 1 and 2. This observation provides a distributed method of risk management which reduces the complexity of decision-making by simplifying the network structures and classifying the nodes based on their outer degrees. By comparing three cases, we also conclude that more complex cyber interdependencies induce higher cost on the principal in the security investment. Note that in the above case studies, all variables were evaluated under the expectation with respect to the cyber risk uncertainty. As shown in Corollary 2, even though the expected compensation is independent of the network risk uncertainty, the actual compensation during contract implementation is influenced by the volatility term Σ t (Y t ). We present two scenarios in Fig. 8, where Fig. 8(a) and Fig. 8(b) are the com- pensation realizations under Σ t (Y t ) = I and Σ t (Y t ) = [1, 1, 0, 0; 0, 1, 1, 0; 0, 0, 1, 1; 1, 0, 0, 1], respectively. When the nodes' risks face more sources of uncertainties in Fig. 8(b), the corresponding payment exhibits a larger variance comparing with the one in Fig. 8(a), which is consistent with the result of Corollary 2. Conclusion In this paper, we have addressed the problem of dynamic systemic cyber risk management of enterprise networks, where the principal provides contractual incentives to the manager, which include the compensations of direct cost of effort and indirect cost from risk uncertainties. This has involved a stochastic Stackelberg differential game with asymmetric information in a principal-agent setting. Under the optimal incentive compatible scheme we have designed, the principal has rational controllability of the systemic risk where the suggested and adopted efforts coincide, and the risk manager's behavior is strategically neutral, depending only on the current net cost. Under mild conditions, we have obtained a separation principle where the effort estimation and the remuneration design can be separately achieved. We further have revealed a certainty equivalence principle for a class of dynamic mechanism design problems where the information rent is equal to zero. Through case studies, we have identified the network effects in the systemic risk management where the connectivity and node's outer degree play an important role in the decision making. Future work on this topic would consider cyber risk management of enterprise networks under Markov jump risk dynamics.
12,021
1908.06893
2967126358
As malicious attacks grow in number, more and more people and organizations are falling prey to social engineering attacks. Despite considerable research on mitigation systems, attackers continually refine their modus operandi, using sophisticated machine learning and natural language processing techniques to launch targeted attacks that deceive both detection mechanisms and victims. We propose a system for advanced email masquerading attacks using Natural Language Generation (NLG) techniques. Trained on legitimate emails together with a controlled influx of malicious content, the proposed deep learning system generates malicious emails customized to the attacker's intent. The system leverages Recurrent Neural Networks (RNNs) for automated text generation. We also evaluate how well the generated emails defeat statistical detectors, and compare and analyze them against a proposed baseline.
Natural language generation techniques have become widely popular for synthesizing unique pieces of textual content. The NLG techniques proposed in @cite_25 @cite_28 rely on templates pre-constructed for specific purposes, and the fake email generation system in @cite_6 uses a set of manually constructed rules to pre-define the structure of the fake emails. Recent advances in deep learning have paved the way for generating both creative and objective textual content, given a sufficient amount of training text. RNN-based language models have been used to generate text across a wide range of genres, including poetry @cite_22 @cite_4 , fake reviews @cite_18 , tweets @cite_13 , and geographical descriptions @cite_28 .
{ "abstract": [ "Malicious crowdsourcing forums are gaining traction as sources of spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on \"usefulness\" metrics by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.", "", "", "Georeferenced data sets are often large and complex. Natural Language Generation (NLG) systems are beginning to emerge that generate texts from such data. One of the challenges these systems face is the generation of geographic descriptions referring to the location of events or patterns in the data. Based on our studies in the domain of meteorology we present a two staged approach to generating geographic descriptions. The first stage involves using domain knowledge based on the task context to select a frame of reference, and the second involves using constraints imposed by the end user to select values within a frame of reference. Because geographic concepts are inherently vague our approach does not guarantee a distinguishing description. Our evaluation studies show that NLG systems, because they can analyse input data exhaustively, can produce more fine-grained geographic descriptions that are more useful to end users than those generated by human experts.", "We focus on email-based attacks, a rich field with well-publicized consequences. We show how current Natural Language Generation (NLG) technology allows an attacker to generate masquerade attacks on scale, and study their effectiveness with a within-subjects study. We also gather insights on what parts of an email do users focus on and how users identify attacks in this realm, by planting signals and also by asking them for their reasoning. We find that: (i) 17 of participants could not identify any of the signals that were inserted in emails, and (ii) Participants were unable to perform better than random guessing on these attacks. 
The insights gathered and the tools and techniques employed could help defenders in: (i) implementing new, customized anti-phishing solutions for Internet users including training next-generation email filters that go beyond vanilla spam filters and capable of addressing masquerade, (ii) more effectively training and upgrading the skills of email users, and (iii) understanding the dynamics of this novel attack and its ability of tricking humans.", "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.", "Figures Preface 1. Introduction 2. National Language Generation in practice 3. The architecture of a Natural Language Generation system 4. Document planning 5. Microplanning 6. Surface realisation 7. Beyond text generation Appendix References Index." ], "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_28", "@cite_6", "@cite_13", "@cite_25" ], "mid": [ "2951022568", "", "2563845258", "2094303736", "2598692538", "2251870912", "1645937837" ] }
Automated email Generation for Targeted Attacks using Natural Language
The continuous adversarial growth and learning has been one of the major challenges in the field of Cybersecurity. With the immense boom in usage and adaptation of the Internet, staggering numbers of individuals and organizations have fallen prey to targeted attacks like phishing and pharming. Such attacks result in digital identity theft causing personal and financial losses to unknowing victims. Over the past decade, researchers have proposed a wide variety of detection methods to counter such attacks (e.g., see (Verma and Hossain, 2013;Thakur and Verma, 2014;Verma and Dyer, 2015;Verma and Rai, 2015;Verma and Das, 2017), and references cited therein). However, wrongdoers have exploited cyber resources to launch newer and sophisticated attacks to evade machine and human supervision. Detection systems and algorithms are commonly trained on historical data and attack patterns. Innovative attack vectors can trick these pre-trained detection and classification techniques and cause harm to the victims. Email is a common attack vector used by phishers that can be embedded with poisonous links to malicious websites, malign attachments like malware executables, etc (Drake et al., 2004). Anti-Phishing Working Group (APWG) has identified a total of 121,860 unique phishing email reports in March 2017. In 2016, APWG received over 1,313,771 unique phishing complaints. According to sources in IRS Return Integrity Compliance Services, around 870 organizations had received W-2 based phishing scams in the first quarter of 2017, which has increased significantly from 100 organizations in 2016. And the phishing scenario keeps getting worse as attackers use more intelligent and sophisticated ways of scamming victims. Fraudulent emails targeted towards the victim may be constructed using a variety of techniques fine-tuned to create the perfect deception. While manually fine-tuning such emails guarantees a higher probability of a successful attack, it requires a considerable amount of time. Phishers are always looking for automated means for launching fast and effective attack vectors. Some of these techniques include bulk mailing or spamming, including action words and links in a phishing email, etc. But these can be easily classified as positive warnings owing to improved statistical detection models. Email masquerading is also a popular cyberattack technique where a phisher or scammer after gaining access to an individual's email inbox or outbox can study the nature/content of the emails sent or received by the target. He can then synthesize targeted malicious emails masqueraded as a benign email by incorporating features observed in the target's emails. The chances of such an attack being detected by an automated pre-trained classifier is reduced. The malicious email remain undetected, thereby increasing the chances of a successful attack. Current Natural Language Generation (NLG) techniques have allowed researchers to generate natural language text based on a given context. Highly sophisticated and trained NLG systems can involve text generation based on predefined grammar like the Dada Engine (Baki et al., 2017) or leverage deep learning neural networks like RNN (Yao et al., 2017) for generating text. Such an approach essentially facilitates the machine to learn a model that emulates the input to the system. The system can then be made to generate text that closely resembles the input structure and form. Such NLG systems can therefore become dangerous tools in the hands of phishers. 
Advanced deep learning neural networks (DNNs) can be effectively used to generate coherent sequences of text when trained on suitable textual content. Researchers have used such systems for generating textual content across a wide variety of genres -from tweets (Sidhaye and Cheung, 2015) to poetry (Ghazvininejad et al., 2016). Thus we can assume it is not long before phishers and spammers can use email datasets -legitimate and malicious -in conjunction with DNNs to generate deceptive malicious emails. By masquerading the properties of a legitimate email, such carefully crafted emails can deceive pre-trained email detectors, thus making people and organizations vulnerable to phishing scams. In this paper, we address the new class of attacks based on automated fake email generation. We start off by demonstrating the practical usage of DNNs for fake email generation and walk through a process of fine-tuning the system, varying a set of parameters that control the content and intent of the text. The key contributions of this paper are: 1. A study of the feasibility and effectiveness of deep learning techniques in email generation. 2. Demonstration of an automated system for generation of 'fake' targeted emails with a malicious intent. 3. Fine-tuning synthetic email content depending on training data -intent and content parameter tuning. 4. Comparison with a baseline -synthetic emails generated by Dada engine (Baki et al., 2017). 5. Detection of synthetic emails using a statistical detector and investigation of effectiveness in tricking an existing spam email classifier (built using SVM). Experimental Methodology The section has been divided into four subsections. We describe the nature and source of the training and evaluation data in Section 3.1. The pre-processing steps are demonstrated in Section 3.2. The system setup and experimental settings have been described in Section 3.3. Data description To best emulate a benign email, a text generator must learn the text representation in actual legitimate emails. Therefore, it is necessary to incorporate benign emails in training the model. However, as a successful attacker, our main aim is to create the perfect deceptive email -one which despite having malign components like poisoned links or attachments, looks legitimate enough to bypass statistical detectors and human supervision. Primarily, for the reasons stated above, we have used multiple email datasets, belonging to both legitimate and malicious classes, for training the system model and also in the quantitative evaluation and comparison steps. For our training model, we use a larger ratio of malicious emails compared to legitimate data (approximate ratio of benign to malicious is 1:4). Legitimate dataset. We use three sets of legitimate emails for modeling our legitimate content. The legitimate emails were primarily extracted from the outbox and inbox of real individuals. Thus the text contains a lot of named entities belonging to PERSON, LOC and ORGANIZATION types. The emails have been extracted from three different sources stated below: • 48 emails sent by Sarah Palin (Source 1) and 55 from Hillary Clinton (Source 2) obtained from the archives released in (The New York Times, 2011; WikiLeaks, 2016) respectively. • 500 emails from the Sent items folder of the employees from the Enron email corpus (Source 3) (Enron Corpus, 2015). Malicious dataset. The malicious dataset was difficult to acquire. 
We used two malicious sources of data mentioned below: • 197 Phishing emails collected by the second authorcalled Verma phish below. • 3392 Phishing emails from Jose Nazario's Phishing corpus 1 (Source 2) Evaluation dataset. We compared our system's output against a small set of automatically generated emails provided by the authors of (Baki et al., 2017). The provided set consists of 12 emails automatically generated using the Dada Engine and manually generated grammar rules. The set consists of 6 emails masquerading as Hillary Clinton emails and 6 emails masquerading as emails from Sarah Palin. Tables 1 and 2 describe some statistical details about the legitimate and malicious datasets used in this system. We define length (L) as the number of words in the body of an email. We define Vocabulary (V ) as the number of unique words in an email. A few observations from the datasets above: the malicious content is relatively more verbose than than the legitimate counterparts. Moreover, the size of the malicious data is comparatively higher compared to the legitimate content. Data Filtering and Preprocessing We considered some important steps for preprocessing the important textual content in the data. Below are the common preprocessing steps applied to the data: • Removal of special characters like @, #, $, % as well as common punctuations from the email body. • emails usually have other URLs or email IDs. These can pollute and confuse the learning model as to what are the more important words in the text. Therefore, we replaced the URLs and the email addresses with the <LINK> and <EID> tags respectively. • Replacement of named entities with the <NET> tag. We use Python NLTK NER for identification of the named entities. On close inspection of the training data, we found that the phishing emails had incoherent HTML content which can pollute the training model. Therefore, from the original data (in Table 2), we carefully filter out the emails that were not in English, and the ones that had all the text data was embedded in HTML. These emails usually had a lot of random character strings -thus the learning model could be polluted with such random text. Only the phishing emails in our datasets had such issues. Table 3 gives the details about the filtered phishing dataset. Experimental Setup We use a deep learning framework for the Natural Language Generation model. The system used for learning the email model is developed using Tensorflow 1.3.0 and Python 3.5. This section provides a background on a Recurrent Neural Network for text generation. Deep Neural Networks are complex models for computation with deeply connected networks of neurons to solve complicated machine learning tasks. Recurrent Neural Networks (RNNs) are a type of deep learning networks better suited for sequential data. RNNs can be used to learn character and word sequences from natural language text (used for training). The RNN system used in this paper is capable of generating text by varying levels of granularity, i.e. at the character level or word level. For our training and evaluation, we make use of Word-based RNNs since previous text generation systems (Xie et al., 2017), (Henderson et al., 2014) have generated coherent and readable content using word-level models. A comparison between Character-based and Wordbased LSTMs in (Xie et al., 2017) proved that for a sample of generated text sequence, word level models have lower perplexity than character level deep learners. 
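A rough sketch of the tag-replacement preprocessing described in Section 3.2 is given below; the regular expressions and the NLTK-based named-entity step are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch of the preprocessing in Section 3.2: mask URLs and email addresses with <LINK> and
# <EID>, strip special characters, and replace named entities with <NET> using NLTK's NE
# chunker (the NLTK POS tagger and NE chunker resources must be downloaded beforehand).
import re
import nltk

URL_RE = re.compile(r"https?://\S+|www\.\S+")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
SPECIAL_RE = re.compile(r"[@#$%&*!?,;:()\[\]{}]")

def preprocess(body: str) -> str:
    body = URL_RE.sub("<LINK>", body)
    body = EMAIL_RE.sub("<EID>", body)
    body = SPECIAL_RE.sub(" ", body)
    tokens = body.split()
    tree = nltk.ne_chunk(nltk.pos_tag(tokens))
    # Named-entity subtrees (PERSON, ORGANIZATION, GPE, ...) are collapsed to <NET>
    words = ["<NET>" if isinstance(node, nltk.Tree) else node[0] for node in tree]
    return " ".join(words)

print(preprocess("Hi John, please wire $500 to acct@example.com via http://phish.example"))
```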
Word-level models fare better because character-based text generators suffer from spelling errors and incoherent text fragments. 3.3.1. RNN architecture Traditional language models like N-grams are limited by the history or the sequence of the textual content that these models are able to look back upon while training. RNNs, in contrast, are able to retain long-term information provided by a text sequence, making them work as "memory"-based models. In practice, however, plain RNNs are not the best performers when it comes to preserving long-term dependencies. For this reason we use Long Short-Term Memory (LSTM) networks, which are able to learn a better language/text representation for longer sequences of text. We experimented with a few combinations of hyperparameters; the number of RNN nodes, number of layers, epochs and time steps were chosen empirically. The input text content is fed into our RNN in the form of word embeddings. The system was built using 2 hidden LSTM layers, and each LSTM cell has 512 nodes. The input data is split into mini-batches of 10 and trained for 100 epochs with a learning rate of 2 × 10^-3. The sequence length was selected as 20. We use the cross-entropy (softmax) loss (Goodfellow et al., 2016) to compute the training error, and the Adam optimization technique (Kingma and Ba, 2014) to update the weights. The system was trained on an Amazon Web Services EC2 Deep Learning instance using an Nvidia Tesla K80 GPU. The training takes about 4 hours. Text Generation and Sampling The trained model is used to generate the email body based on the nature of the input. We varied the sampling technique used to generate new words during text generation. Generation phase. Feeding a word w_0 into the trained LSTM model outputs the word w_1 most likely to occur after w_0, according to P(w_1 | w_0). If we want to generate a text body of n words, we feed w_1 back into the RNN and obtain the next word by evaluating P(w_2 | w_0, w_1). This is done repeatedly to generate a text sequence of n words: w_0, w_1, w_2, ..., w_n. Sampling parameters. We vary the sampling parameters to generate the email body samples. For our implementation, we use temperature as the main parameter. Given a training sequence of words w_0, w_1, w_2, ..., w_n, the goal of the trained LSTM network is to predict the set of words most likely to follow the training sequence. Based on the input words, the model builds a probability distribution P(w_{t+1} | w_{t' <= t}) = softmax(w_t), where softmax normalization with temperature control (Temp) is defined as P(softmax(w_t^j)) = K(w_t^j, Temp) / Σ_{j=1}^{n} K(w_t^j, Temp), with K(w_t^j, Temp) = exp(w_t^j / Temp). The novelty or eccentricity of the generated text can be controlled by varying the temperature parameter in the range 0 < Temp ≤ 1.0 (the maximum value is 1.0). We vary the nature of the model's predictions between two main regimes: deterministic and stochastic. Lower values of Temp generate relatively deterministic samples, while higher values make the process more stochastic. Both regimes have drawbacks: deterministic samples can suffer from repetitive text, while samples generated stochastically are prone to spelling mistakes, grammatical errors and nonsensical words. We generate our samples at temperature values of 0.2, 0.5, 0.7 and 1.0.
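A minimal Keras-style sketch of the word-level model and the temperature-controlled sampling described above is given below. It mirrors the reported hyperparameters (two LSTM layers of 512 units, sequence length 20, Adam with learning rate 2 × 10^-3), but it is not the authors' original TensorFlow 1.3 implementation; the vocabulary size, embedding dimension and padding id are assumptions.

```python
# Word-level LSTM language model with temperature-controlled sampling.  Hyperparameters
# follow Section 3.3.1; VOCAB, EMB and the padding id 0 are assumptions for illustration.
import numpy as np
import tensorflow as tf

VOCAB, SEQ_LEN, EMB = 10000, 20, 128

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, EMB),
    tf.keras.layers.LSTM(512, return_sequences=True),
    tf.keras.layers.LSTM(512),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(2e-3),
              loss="sparse_categorical_crossentropy")
# model.fit(X, y, batch_size=10, epochs=100)   # X: (num_windows, 20) word ids, y: next word id

def sample_next(probs, temperature=0.7):
    """Draw the next word id from the softmax output with temperature control."""
    logits = np.log(np.clip(probs, 1e-12, 1.0)) / temperature
    scaled = np.exp(logits - logits.max())
    scaled /= scaled.sum()
    return np.random.choice(len(scaled), p=scaled)

def generate(seed_ids, n_words, temperature=0.7):
    ids = list(seed_ids)
    for _ in range(n_words):
        window = np.array(ids[-SEQ_LEN:], dtype=np.int32)[None, :]
        window = np.pad(window, ((0, 0), (SEQ_LEN - window.shape[1], 0)))  # left-pad short seeds
        probs = model.predict(window, verbose=0)[0]
        ids.append(sample_next(probs, temperature))
    return ids
```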
For our evaluation and detection experiments, we randomly select 25 system generated samples, 2 samples generated at a temperature of 0.2, 10 samples at temperature 0.5, 5 samples at a temperature of 0.7 and 8 samples at temperature 1.0. Customization of Malicious Intent One important aspect of malicious emails is their harmful intent. The perfect attack vector will have malicious elements like a poisonous link or malware attachment wrapped in legitimate context, something which is sly enough to fool both a state-of-the-art email classifier as well as the victim. One novelty of this system training is the procedure of injecting malicious intent during training and generating malicious content in the synthetic emails. We followed a percentage based influx of malicious content into the training model along with the legitimate emails. The training models were built by varying the percentage (5%, 10%, 30% and 50%) of phishing emails selected from the entire phishing dataset along with the entire legitimate emails dataset. We trained separate RNN models on all these configurations. For studying the varying content in emails, we generate samples using temperature values at 0.2, 0.5, 0.7 and 1.0. Detection using Existing Algorithms We perform a simple quantitative evaluation by using three text-based classification algorithms on our generated emails. Using the Python SciKit-Learn library, we test three popular text-based filtering algorithms -Support Vector Machines (Maldonado and L'Huillier, 2013), Naive Bayes (Witten et al., 2016) and Logistic Regression (Franklin, 2005). The training set was modeled as a document-term matrix and the word count vector is used as the feature for building the models. For our evaluation, we train models using Support Vector Machines (SVM), Naive Bayes (NB) and Logistic Regression (LR) models on a training data of 300 legitimate emails from WikiLeaks archives 2 and 150 phishing emails from Cornell PhishBowl (IT@Cornell, 2018). We test the data on 100 legitimate emails from WikiLeaks archives that were not included in the training set and 25 'fake' emails that were generated by our natural language generation model. Analysis and Results We discuss the results of the generative RNN model in this section. We give examples of the email text generated with various training models and varying temperatures. We also provide the accuracy of the trained classifiers on a subset of these generated email bodies (after slight post processing). We try to provide a qualitative as well as a quantitative review of the generated emails. Examples of Machine generated emails (A) Training only on Legitimates and varying sampling temperatures We show examples of emails generated using models trained on legitimate emails and sampled using a temperature of 1.0. Example I at Temperature = 1.0: Dear <NME> The article in the <NME> offers promotion should be somewhat changed for the next two weeks. <NME> See your presentation today. <NME> Example II Example I at Temperature = 0.7: Sir I will really see if they were more comments tomorrow and review and act upon this evening <NET>. The engineer I can add there some <LINK> there are the issues <NET>. Could you give me a basis for the call him he said The example above shows that while small substrings make some sense. The sequence of text fragments generated make very little sense when read as a whole. 
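The word-count-based detection setup described above (document-term matrix features with SVM, Naive Bayes and Logistic Regression classifiers) can be sketched with scikit-learn as follows; the toy training and test lists stand in for the WikiLeaks and Cornell PhishBowl emails, and the linear SVM kernel is an assumption.

```python
# Sketch of the statistical detectors in Section 3.5: word-count features from a
# document-term matrix with SVM, Naive Bayes and Logistic Regression.  The email lists
# below are placeholders for the WikiLeaks / Cornell PhishBowl corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

train_texts = ["please find the attached agenda for monday", "verify your account now <LINK>"]
train_labels = [0, 1]                        # 0 = legitimate, 1 = phishing / fake
test_texts = ["thanks for the update", "your password expires click <LINK>"]
test_labels = [0, 1]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

for name, clf in [("SVM", LinearSVC()), ("NB", MultinomialNB()),
                  ("LR", LogisticRegression(max_iter=1000))]:
    clf.fit(X_train, train_labels)
    print(name)
    print(classification_report(test_labels, clf.predict(X_test), zero_division=0))
```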
When comparing these with the phishing email structure described in (Drake et al., 2004), the generated emails have very little malicious content. The red text marks the incongruous text pieces that do not make sense. (B) Training on Legitimates + 5% Malicious content: In the first step of intent injection, we generate emails by providing the model with all the legitimate emails and 5% of the cleaned phishing emails data (Table 3). Thus for this model, we create the input data with 603 legitimate emails and 114 randomly selected phishing emails. We show as examples two samples generated using temperature values equal to 0.5 and 0.7. Example I at Temperature = 0.5: Sir Here are the above info on a waste of anyone, but an additional figure and it goes to <NET>. Do I <NET> got the opportunity for a possible position between our Saturday <NME> or <NET> going to look over you in a presentation you will even need <NET> to drop off the phone. Example II at Temperature = 0.7: Hi owners <NET> your Private <NET> email from <NET> at <NET> email <NET> Information I'll know our pending your fake check to eol thanks <NET> and would be In maintenance in a long online demand The model thus consists of benign and malicious emails in an approximate ratio of 5:1. Some intent and urgency can be seen in the email context. But the incongruent words still remain. (C) Training on Legitimates + 30% Malicious content: We further improve upon the model proposed in (B). In this training step, we feed our text generator all the legitimate emails (603 benign) coupled with 30% of the malicious emails data (683 malicious). This is an almost balanced dataset of benign and phishing emails. The following examples demonstrate the variation in text content in the generated emails. Example I at Temperature = 0.5: Sir we account access will do so may not the emails about the <NET> This <NET> is included at 3 days while when to <NET> because link below to update your account until the deadline we will received this information that we will know that your <NET> account information needs Example II at Temperature = 1.0: Dear registered secur= online, number: hearing from This trade guarded please account go to pay it. To modify your Account then fill in necessary from your notification preferences, please PayPal account provided with the integrity of information on the Alerts tab. A good amount of text seems to align with the features of malicious emails described in (Drake et al., 2004) have malicious intent in it. We choose two examples to demonstrate the nature of text in the generated emails. We include examples from further evaluation of steps. (D) Training on Legitimates + 50% Malicious content: In this training step, we consider a total of 50% of the malicious data (1140 phishing emails) and 603 legitimate emails. This is done to observe whether training on an unbalanced data, with twice the ratio of malign instances than legitimate ones, can successfully incorporate obvious malicious flags like poisonous links, attachments, etc. We show two examples of emails generated using deep learners at varying sampling temperatures. Example I at Temperature = 0.7: If you are still online. genuine information in the message, notice your account has been frozen to your account in order to restore your account as click on CONTINUE Payment Contact <LINK> UK. 
Example IT at Temperature = 0.5: Hi will have temporarily information your account will be restricted during that the Internet accounts and upgrading password An data Thank you for your our security of your Account Please click on it using the <NET> server This is an new offer miles with us as a qualified and move in The generated text reflects malicious features like URL links and tone of urgency. We can assume that the model picks up important cues of malign behavior. The model then learns to incorporate such cues into the sampled data during training phase. Evaluation using Detection Algorithm We train text classification models using Support Vector Machines (SVM), Naive Bayes (NB) and Logistic Regression (LR) models on a training data of 300 legitimate emails from WikiLeaks archives 3 and 150 phishing emails from Cornell PhishBowl (IT@Cornell, 2018). We test the data on 100 legitimate emails from WikiLeaks archives that were not included in the training set and 25 'fake' emails that were generated by our natural language generation model trained on a mix of legitimate and 50% malicious emails. We randomly select the emails (the distribution is: 2 samples generated at a temperature of 0.2, 10 samples at temperature 0.5, 5 samples at a temperature of 0.7 and 8 samples at temperature 1.0) for our evaluation. We use the Scikit-Learn Python library to generate the document-term matrix and the word count vector from a given sample of email text body used as a feature for train-3 https://wikileaks.org/ ing the classification models. The Table 4 reports the accuracy, precision, recall, and F1-scores on the test dataset using SVM, Naive Bayes and Logistic Regression classifiers. Comparison of emails with another NLG model The authors in (Baki et al., 2017) discuss using a Recursive Transition Network for generating fake emails similar in nature to legitimate emails. The paper discusses a user study testing the efficacy of these fake emails and their effectiveness in being used for deceiving people. The authors use only legitimate emails to train their model and generate emails similar to their training data -termed as 'fake' emails. In this section, we compare a couple of examples selected randomly from the emails generated by the Dada Engine used in (Baki et al., 2017) and the outputs of our Deep Learning system generated emails. Generated by the RNN (Example I): Hi will have temporarily information your account will be restricted during that the Internet accounts and upgrading password An data Thank you for your our security of your Account Please click on it using the < N ET > server This is an new offer miles with us as a qualified and move in Generated by the RNN (Example II): Sir Kindly limit, it [IMAGE] Please contact us contained on this suspension will not be = interrupted by 10 product, or this temporary cost some of the Generated by the Dada Engine: Great job on the op-ed! Are you going to submit? Also, Who will be attending? The examples provide evidence that emails generated by the RNN are more on the lines of phishing emails than the emails generated by the Dada Engine. Of course, the goal of the email generated by the Dada engine is masquerade, not phishing. Because of the rule-based method employed that uses complete sentences, the emails generated by the Dada engine have fewer problems of coherence and grammaticality. Error Analysis We review two types of errors observed in the evaluation of our RNN text generation models developed in this study. 
First, the text generated by multiple RNN models suffer from repetitive tags and words. The second aspect of error analysis is to look at the misclassification by the statistical detection algorithms. Here we look at a small sample of emails that were marked as legitimate despite being fake in nature. We try to investigate the factors in the example sample that can explain the misclassification errors by the algorithms. Example (A): Hi GHT location <EID> Inc Dear <NET> Password Location <NET> of <NET> program We have been riding to meet In a of your personal program or other browser buyer buyer The email does not commit to a secure F or security before You may read a inconvenience during Thank you <NET> Example (B): Sir we account access will do so may not the emails about the <NET> This <NET> is included at 3 days while when to <NET> because the link below to update your account until the deadline we will received this information that we will know that your <NET> account information needs Example (C): Sir This is an verificati= <LINK> messaging center, have to inform you that we are conducting more software, Regarding Your Password : <LINK> & June 20, 2009 Webmail Please Click Here to Confirm Examples (A), (B) and (C) are emails generated from a model trained on legitimate and 50% of phishing data (Type (D) in Section 4.1.) using a temperature of 0.7. There can be quite a few reasons for the misclassification -almost all the above emails despite being 'fake' in nature have considerable overlap with words common to the legitimate text. Moreover, Example (A) has lesser magnitude of indication of malicious intent. And the amount of malicious intent in Example (B), although notable to the human eye, is enough to fool a simple text-based email classification algorithm. Example (C) has multiple link tags implying possible malicious intent or presence of poisonous links. However, the position of these links play an important role in deceiving the classifier. A majority of phishing emails have links at the end of the text body or after some action words like click, look, here, confirm etc. In this case, the links have been placed at arbitrary locations inside the text sequence -thereby making it harder to detect. These misclassification or errors on part of the classifier can be eliminated by human intervention or by designing a more sensitive and sophisticated detection algorithm. Conclusions and Future Work While the RNN model generated text which had 'some' malicious intent in them -the examples shown above are just a few steps from being coherent and congruous. We designed an RNN based text generation system for generating targeted attack emails which is a challenging task in itself and a novel approach to the best of our knowledge. The examples generated however suffer from random strings and grammatical errors. We identify a few areas of improvement for the proposed system -reduction of repetitive content as well as inclusion of more legitimate and phishing examples for analysis and model training. We would also like to experiment with addition of topics and tags like 'bank account', 'paypal', 'password renewal', etc. which may help generate more specific emails. It would be interesting to see how a generative RNN handles topic based email generation problem.
4,536
1908.06893
2967126358
As malicious attacks grow in number, more and more people and organizations are falling prey to social engineering attacks. Despite considerable research on mitigation systems, attackers continually refine their modus operandi, using sophisticated machine learning and natural language processing techniques to launch targeted attacks that deceive both detection mechanisms and victims. We propose a system for advanced email masquerading attacks using Natural Language Generation (NLG) techniques. Trained on legitimate emails together with a controlled influx of malicious content, the proposed deep learning system generates malicious emails customized to the attacker's intent. The system leverages Recurrent Neural Networks (RNNs) for automated text generation. We also evaluate how well the generated emails defeat statistical detectors, and compare and analyze them against a proposed baseline.
The system used for synthesizing emails in this work is broadly aligned with the methodology described in @cite_14 @cite_9 . However, our proposed system involves no manual labor and, with some post-processing, has been shown to deceive an automated supervised classification system.
{ "abstract": [ "This paper describes a two-stage process for stochastic generation of email, in which the first stage structures the emails according to sender style and topic structure (high-level generation), and the second stage synthesizes text content based on the particulars of an email element and the goals of a given communication (surface-level realization). Synthesized emails were rated in a preliminary experiment. The results indicate that sender style can be detected. In addition we found that stochastic generation performs better if applied at the word level than at an original-sentence level (“template-based”) in terms of email coherence, sentence fluency, naturalness, and preference.", "This paper presents the design and implementation details of an email synthesizer using two-stage stochastic natural language generation, where the first stage structures the emails according to sender style and topic structure, and the second stage synthesizes text content based on the particulars of an email structure element and the goals of a given communication for surface realization. The synthesized emails reflect sender style and the intent of communication, which can be further used as synthetic evidence for developing other applications." ], "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "2250397977", "2251741394" ] }
Automated email Generation for Targeted Attacks using Natural Language
The continuous adversarial growth and learning has been one of the major challenges in the field of Cybersecurity. With the immense boom in usage and adaptation of the Internet, staggering numbers of individuals and organizations have fallen prey to targeted attacks like phishing and pharming. Such attacks result in digital identity theft causing personal and financial losses to unknowing victims. Over the past decade, researchers have proposed a wide variety of detection methods to counter such attacks (e.g., see (Verma and Hossain, 2013;Thakur and Verma, 2014;Verma and Dyer, 2015;Verma and Rai, 2015;Verma and Das, 2017), and references cited therein). However, wrongdoers have exploited cyber resources to launch newer and sophisticated attacks to evade machine and human supervision. Detection systems and algorithms are commonly trained on historical data and attack patterns. Innovative attack vectors can trick these pre-trained detection and classification techniques and cause harm to the victims. Email is a common attack vector used by phishers that can be embedded with poisonous links to malicious websites, malign attachments like malware executables, etc (Drake et al., 2004). Anti-Phishing Working Group (APWG) has identified a total of 121,860 unique phishing email reports in March 2017. In 2016, APWG received over 1,313,771 unique phishing complaints. According to sources in IRS Return Integrity Compliance Services, around 870 organizations had received W-2 based phishing scams in the first quarter of 2017, which has increased significantly from 100 organizations in 2016. And the phishing scenario keeps getting worse as attackers use more intelligent and sophisticated ways of scamming victims. Fraudulent emails targeted towards the victim may be constructed using a variety of techniques fine-tuned to create the perfect deception. While manually fine-tuning such emails guarantees a higher probability of a successful attack, it requires a considerable amount of time. Phishers are always looking for automated means for launching fast and effective attack vectors. Some of these techniques include bulk mailing or spamming, including action words and links in a phishing email, etc. But these can be easily classified as positive warnings owing to improved statistical detection models. Email masquerading is also a popular cyberattack technique where a phisher or scammer after gaining access to an individual's email inbox or outbox can study the nature/content of the emails sent or received by the target. He can then synthesize targeted malicious emails masqueraded as a benign email by incorporating features observed in the target's emails. The chances of such an attack being detected by an automated pre-trained classifier is reduced. The malicious email remain undetected, thereby increasing the chances of a successful attack. Current Natural Language Generation (NLG) techniques have allowed researchers to generate natural language text based on a given context. Highly sophisticated and trained NLG systems can involve text generation based on predefined grammar like the Dada Engine (Baki et al., 2017) or leverage deep learning neural networks like RNN (Yao et al., 2017) for generating text. Such an approach essentially facilitates the machine to learn a model that emulates the input to the system. The system can then be made to generate text that closely resembles the input structure and form. Such NLG systems can therefore become dangerous tools in the hands of phishers. 
Advanced deep learning neural networks (DNNs) can be effectively used to generate coherent sequences of text when trained on suitable textual content. Researchers have used such systems for generating textual content across a wide variety of genres -from tweets (Sidhaye and Cheung, 2015) to poetry (Ghazvininejad et al., 2016). Thus we can assume it is not long before phishers and spammers can use email datasets -legitimate and malicious -in conjunction with DNNs to generate deceptive malicious emails. By masquerading the properties of a legitimate email, such carefully crafted emails can deceive pre-trained email detectors, thus making people and organizations vulnerable to phishing scams. In this paper, we address the new class of attacks based on automated fake email generation. We start off by demonstrating the practical usage of DNNs for fake email generation and walk through a process of fine-tuning the system, varying a set of parameters that control the content and intent of the text. The key contributions of this paper are: 1. A study of the feasibility and effectiveness of deep learning techniques in email generation. 2. Demonstration of an automated system for generation of 'fake' targeted emails with a malicious intent. 3. Fine-tuning synthetic email content depending on training data -intent and content parameter tuning. 4. Comparison with a baseline -synthetic emails generated by Dada engine (Baki et al., 2017). 5. Detection of synthetic emails using a statistical detector and investigation of effectiveness in tricking an existing spam email classifier (built using SVM). Experimental Methodology The section has been divided into four subsections. We describe the nature and source of the training and evaluation data in Section 3.1. The pre-processing steps are demonstrated in Section 3.2. The system setup and experimental settings have been described in Section 3.3. Data description To best emulate a benign email, a text generator must learn the text representation in actual legitimate emails. Therefore, it is necessary to incorporate benign emails in training the model. However, as a successful attacker, our main aim is to create the perfect deceptive email -one which despite having malign components like poisoned links or attachments, looks legitimate enough to bypass statistical detectors and human supervision. Primarily, for the reasons stated above, we have used multiple email datasets, belonging to both legitimate and malicious classes, for training the system model and also in the quantitative evaluation and comparison steps. For our training model, we use a larger ratio of malicious emails compared to legitimate data (approximate ratio of benign to malicious is 1:4). Legitimate dataset. We use three sets of legitimate emails for modeling our legitimate content. The legitimate emails were primarily extracted from the outbox and inbox of real individuals. Thus the text contains a lot of named entities belonging to PERSON, LOC and ORGANIZATION types. The emails have been extracted from three different sources stated below: • 48 emails sent by Sarah Palin (Source 1) and 55 from Hillary Clinton (Source 2) obtained from the archives released in (The New York Times, 2011; WikiLeaks, 2016) respectively. • 500 emails from the Sent items folder of the employees from the Enron email corpus (Source 3) (Enron Corpus, 2015). Malicious dataset. The malicious dataset was difficult to acquire. 
We used two malicious sources of data, mentioned below: • 197 Phishing emails collected by the second author, called Verma phish below. • 3392 Phishing emails from Jose Nazario's Phishing corpus (Source 2) Evaluation dataset. We compared our system's output against a small set of automatically generated emails provided by the authors of (Baki et al., 2017). The provided set consists of 12 emails automatically generated using the Dada Engine and manually written grammar rules. The set consists of 6 emails masquerading as Hillary Clinton emails and 6 emails masquerading as emails from Sarah Palin. Tables 1 and 2 describe some statistical details about the legitimate and malicious datasets used in this system. We define length (L) as the number of words in the body of an email. We define Vocabulary (V) as the number of unique words in an email. A few observations from the datasets above: the malicious content is relatively more verbose than the legitimate counterparts. Moreover, the malicious dataset is also considerably larger than the legitimate one. Data Filtering and Preprocessing We applied several common preprocessing steps to the textual content in the data: • Removal of special characters like @, #, $, % as well as common punctuation marks from the email body. • Emails often contain URLs or email IDs. These can pollute and confuse the learning model as to which words in the text are more important. Therefore, we replaced the URLs and the email addresses with the <LINK> and <EID> tags respectively. • Replacement of named entities with the <NET> tag. We use Python NLTK NER for identification of the named entities (an illustrative sketch of these preprocessing steps is given a little further below). On close inspection of the training data, we found that the phishing emails had incoherent HTML content which can pollute the training model. Therefore, from the original data (in Table 2), we carefully filter out the emails that were not in English, and the ones in which all the text was embedded in HTML. These emails usually had a lot of random character strings, so the learning model could be polluted with such random text. Only the phishing emails in our datasets had such issues. Table 3 gives the details about the filtered phishing dataset. Experimental Setup We use a deep learning framework for the Natural Language Generation model. The system used for learning the email model is developed using Tensorflow 1.3.0 and Python 3.5. This section provides a background on Recurrent Neural Networks for text generation. Deep Neural Networks are computational models built from densely connected layers of neurons, used to solve complex machine learning tasks. Recurrent Neural Networks (RNNs) are a type of deep learning network better suited to sequential data. RNNs can be used to learn character and word sequences from natural language text (used for training). The RNN system used in this paper is capable of generating text at varying levels of granularity, i.e. at the character level or the word level. For our training and evaluation, we make use of word-based RNNs, since previous text generation systems (Xie et al., 2017), (Henderson et al., 2014) have generated coherent and readable content using word-level models. A comparison between character-based and word-based LSTMs in (Xie et al., 2017) showed that, for a sample of generated text, word-level models have lower perplexity than character-level deep learners.
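Returning to the preprocessing steps listed above, the following is a minimal sketch of how the <LINK>, <EID> and <NET> substitutions could be implemented with NLTK. It is illustrative only: the regular expressions and the function name are our own simplifications rather than the authors' code, and the NLTK data packages (punkt, averaged_perceptron_tagger, maxent_ne_chunker, words) must be downloaded beforehand.

```python
import re
import nltk

URL_RE = re.compile(r"https?://\S+|www\.\S+")          # simplified URL pattern (assumed)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")      # simplified email pattern (assumed)
SPECIAL_RE = re.compile(r"[@#$%&*!?,;:]+")              # assumed special-character set

def preprocess(body):
    """Replace URLs, email IDs and named entities with <LINK>, <EID>, <NET> tags."""
    tokens = nltk.word_tokenize(body)
    tree = nltk.ne_chunk(nltk.pos_tag(tokens))
    out = []
    for node in tree:
        if isinstance(node, nltk.Tree):            # an NE chunk such as (PERSON Sarah)
            out.append("<NET>")
        else:
            word = node[0]                         # a plain (word, POS-tag) pair
            if URL_RE.fullmatch(word):
                out.append("<LINK>")
            elif EMAIL_RE.fullmatch(word):
                out.append("<EID>")
            elif not SPECIAL_RE.fullmatch(word):   # drop stand-alone special characters
                out.append(word)
    return " ".join(out)
```

Real-world URL and email tokenization is messier than these patterns suggest; the sketch only conveys the tag-substitution idea.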
Word-level generation was preferred because character-based text generators suffer from spelling errors and incoherent text fragments. 3.3.1. RNN architecture Traditional language models like N-grams are limited by the amount of history, i.e. the length of the preceding text sequence, that they can look back on while training. RNNs, in contrast, are able to retain longer-term information from a text sequence, acting as a "memory"-based model. Plain RNNs are, however, not the best performers when it comes to preserving long-term dependencies. For this reason we use Long Short-Term Memory (LSTM) networks, which are able to learn a better language/text representation for longer sequences of text. We experimented with a few combinations of hyperparameters; the number of RNN nodes, the number of layers, the number of epochs and the number of time steps were chosen empirically. The input text is fed into our RNN in the form of word embeddings. The system was built using 2 hidden LSTM layers, and each LSTM cell has 512 nodes. The input data is split into mini-batches of 10 and trained for 100 epochs with a learning rate of $2 \times 10^{-3}$. The sequence length was set to 20. We use the cross-entropy (softmax) loss (Goodfellow et al., 2016) to compute the training loss, and the Adam optimization technique (Kingma and Ba, 2014) is used for updating the weights. The system was trained on an Amazon Web Services EC2 Deep Learning instance using an Nvidia Tesla K80 GPU. The training takes about 4 hours. Text Generation and Sampling The trained model is used to generate the email body based on the nature of the input. We varied the sampling technique used for generating new words during text generation. Generation phase. Feeding a word $w_0$ into the trained LSTM network outputs the word most likely to occur after $w_0$, namely $w_1$, according to $P(w_1 \mid w_0)$. If we want to generate a text body of $n$ words, we feed $w_1$ back into the RNN model and get the next word by evaluating $P(w_2 \mid w_0, w_1)$. This is done repeatedly to generate a text sequence of $n$ words: $w_0, w_1, w_2, \ldots, w_n$. Sampling parameters. We vary our sampling parameters to generate the email body samples. For our implementation, we choose temperature as the main parameter. Given a training sequence of words $w_0, w_1, w_2, \ldots, w_n$, the goal of the trained LSTM network is to predict the best set of words that follow the training sequence as output. Based on the input set of words, the model builds a probability distribution $P(w_{t+1} \mid w_{t' \le t}) = \mathrm{softmax}(w_t)$, where the softmax normalization with temperature control ($Temp$) is defined as $P(\mathrm{softmax}(w_t^j)) = \frac{K(w_t^j, Temp)}{\sum_{j=1}^{n} K(w_t^j, Temp)}$, with $K(w_t^j, Temp) = e^{w_t^j / Temp}$. The novelty or eccentricity of the RNN text generation model can be controlled by varying the temperature parameter in the range $0 < Temp \le 1.0$ (the maximum value is 1.0). We vary the nature of the model's predictions using two main mechanisms, deterministic and stochastic. Lower values of $Temp$ generate relatively deterministic samples, while higher values make the process more stochastic. Both mechanisms suffer from issues: deterministic samples can suffer from repetitive text, while the samples generated using the stochastic mechanism are prone to spelling mistakes, grammatical errors and nonsensical words. We generate our samples by varying the temperature values over 0.2, 0.5, 0.7 and 1.0.
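As an illustration of the temperature-controlled softmax above, the short sketch below shows one plausible way to sample the next word index from a vector of output scores; the variable names and the toy score vector are ours, not the authors'.

```python
import numpy as np

def sample_next_word(scores, temp=0.7):
    """Sample a word index from softmax(scores / temp); lower temp = more deterministic."""
    scaled = np.asarray(scores, dtype=float) / temp   # K(w, Temp) = exp(w / Temp)
    scaled -= scaled.max()                            # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()     # temperature-controlled softmax
    return int(np.random.choice(len(probs), p=probs))

# Toy example: three candidate words with scores 2.0, 1.0 and 0.1.
print(sample_next_word([2.0, 1.0, 0.1], temp=0.2))   # almost always index 0
print(sample_next_word([2.0, 1.0, 0.1], temp=1.0))   # noticeably more varied
```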
For our evaluation and detection experiments, we randomly select 25 system-generated samples: 2 samples generated at a temperature of 0.2, 10 samples at temperature 0.5, 5 samples at a temperature of 0.7 and 8 samples at temperature 1.0. Customization of Malicious Intent One important aspect of malicious emails is their harmful intent. The perfect attack vector will have malicious elements like a poisonous link or malware attachment wrapped in a legitimate context, sly enough to fool both a state-of-the-art email classifier and the victim. One novelty of this system's training is the procedure of injecting malicious intent during training and generating malicious content in the synthetic emails. We followed a percentage-based influx of malicious content into the training model along with the legitimate emails. The training models were built by varying the percentage (5%, 10%, 30% and 50%) of phishing emails selected from the entire phishing dataset, used along with the entire legitimate email dataset. We trained separate RNN models on all these configurations. For studying the varying content in the emails, we generate samples using temperature values of 0.2, 0.5, 0.7 and 1.0. Detection using Existing Algorithms We perform a simple quantitative evaluation by using three text-based classification algorithms on our generated emails. Using the Python SciKit-Learn library, we test three popular text-based filtering algorithms: Support Vector Machines (Maldonado and L'Huillier, 2013), Naive Bayes (Witten et al., 2016) and Logistic Regression (Franklin, 2005). The training set was modeled as a document-term matrix, and the word-count vector is used as the feature for building the models. For our evaluation, we train Support Vector Machine (SVM), Naive Bayes (NB) and Logistic Regression (LR) models on training data of 300 legitimate emails from the WikiLeaks archives and 150 phishing emails from Cornell PhishBowl (IT@Cornell, 2018). We test on 100 legitimate emails from the WikiLeaks archives that were not included in the training set and 25 'fake' emails that were generated by our natural language generation model. Analysis and Results We discuss the results of the generative RNN model in this section. We give examples of the email text generated with various training models and varying temperatures. We also provide the accuracy of the trained classifiers on a subset of these generated email bodies (after slight post-processing). We try to provide a qualitative as well as a quantitative review of the generated emails. Examples of Machine-generated emails (A) Training only on Legitimates and varying sampling temperatures We show examples of emails generated using models trained only on legitimate emails and sampled at different temperatures. Example I at Temperature = 1.0: Dear <NME> The article in the <NME> offers promotion should be somewhat changed for the next two weeks. <NME> See your presentation today. <NME> Example II at Temperature = 0.7: Sir I will really see if they were more comments tomorrow and review and act upon this evening <NET>. The engineer I can add there some <LINK> there are the issues <NET>. Could you give me a basis for the call him he said The examples above show that, while small substrings make some sense, the sequence of text fragments makes very little sense when read as a whole.
Comparing these with the phishing email structure described in (Drake et al., 2004), the generated emails have very little malicious content. The red text marks the incongruous text pieces that do not make sense. (B) Training on Legitimates + 5% Malicious content: In the first step of intent injection, we generate emails by providing the model with all the legitimate emails and 5% of the cleaned phishing email data (Table 3). Thus, for this model, we create the input data with 603 legitimate emails and 114 randomly selected phishing emails. We show two samples generated using temperature values of 0.5 and 0.7. Example I at Temperature = 0.5: Sir Here are the above info on a waste of anyone, but an additional figure and it goes to <NET>. Do I <NET> got the opportunity for a possible position between our Saturday <NME> or <NET> going to look over you in a presentation you will even need <NET> to drop off the phone. Example II at Temperature = 0.7: Hi owners <NET> your Private <NET> email from <NET> at <NET> email <NET> Information I'll know our pending your fake check to eol thanks <NET> and would be In maintenance in a long online demand The training data thus consists of benign and malicious emails in an approximate ratio of 5:1. Some intent and urgency can be seen in the email context, but the incongruent words still remain. (C) Training on Legitimates + 30% Malicious content: We further improve upon the model proposed in (B). In this training step, we feed our text generator all the legitimate emails (603 benign) coupled with 30% of the malicious email data (683 malicious). This is an almost balanced dataset of benign and phishing emails. The following examples demonstrate the variation in text content in the generated emails. Example I at Temperature = 0.5: Sir we account access will do so may not the emails about the <NET> This <NET> is included at 3 days while when to <NET> because link below to update your account until the deadline we will received this information that we will know that your <NET> account information needs Example II at Temperature = 1.0: Dear registered secur= online, number: hearing from This trade guarded please account go to pay it. To modify your Account then fill in necessary from your notification preferences, please PayPal account provided with the integrity of information on the Alerts tab. A good amount of the generated text aligns with the features of malicious emails described in (Drake et al., 2004) and carries malicious intent. We choose two examples to demonstrate the nature of the text in the generated emails, and include examples from the subsequent evaluation steps. (D) Training on Legitimates + 50% Malicious content: In this training step, we consider a total of 50% of the malicious data (1140 phishing emails) and 603 legitimate emails. This is done to observe whether training on unbalanced data, with roughly twice as many malign instances as legitimate ones, can successfully incorporate obvious malicious flags like poisonous links, attachments, etc. We show two examples of emails generated using the deep learners at varying sampling temperatures. Example I at Temperature = 0.7: If you are still online. genuine information in the message, notice your account has been frozen to your account in order to restore your account as click on CONTINUE Payment Contact <LINK> UK.
Example II at Temperature = 0.5: Hi will have temporarily information your account will be restricted during that the Internet accounts and upgrading password An data Thank you for your our security of your Account Please click on it using the <NET> server This is an new offer miles with us as a qualified and move in The generated text reflects malicious features like URL links and a tone of urgency. We can assume that the model picks up important cues of malign behavior and learns to incorporate such cues into the sampled data during the training phase. Evaluation using Detection Algorithm We train text classification models using Support Vector Machines (SVM), Naive Bayes (NB) and Logistic Regression (LR) on training data of 300 legitimate emails from the WikiLeaks archives (https://wikileaks.org/) and 150 phishing emails from Cornell PhishBowl (IT@Cornell, 2018). We test on 100 legitimate emails from the WikiLeaks archives that were not included in the training set and 25 'fake' emails that were generated by our natural language generation model trained on a mix of legitimate and 50% malicious emails. We randomly select the emails (the distribution is: 2 samples generated at a temperature of 0.2, 10 samples at temperature 0.5, 5 samples at a temperature of 0.7 and 8 samples at temperature 1.0) for our evaluation. We use the Scikit-Learn Python library to generate the document-term matrix and the word-count vector from a given email body, which are used as features for training the classification models (a minimal illustrative sketch of this setup is given at the end of this section). Table 4 reports the accuracy, precision, recall, and F1-scores on the test dataset using the SVM, Naive Bayes and Logistic Regression classifiers. Comparison of emails with another NLG model The authors in (Baki et al., 2017) discuss using a Recursive Transition Network for generating fake emails similar in nature to legitimate emails. The paper describes a user study testing the efficacy of these fake emails and their effectiveness in deceiving people. The authors use only legitimate emails to train their model and generate emails similar to their training data, termed 'fake' emails. In this section, we compare a couple of examples selected randomly from the emails generated by the Dada Engine used in (Baki et al., 2017) with the outputs of our deep learning system. Generated by the RNN (Example I): Hi will have temporarily information your account will be restricted during that the Internet accounts and upgrading password An data Thank you for your our security of your Account Please click on it using the <NET> server This is an new offer miles with us as a qualified and move in Generated by the RNN (Example II): Sir Kindly limit, it [IMAGE] Please contact us contained on this suspension will not be = interrupted by 10 product, or this temporary cost some of the Generated by the Dada Engine: Great job on the op-ed! Are you going to submit? Also, Who will be attending? The examples provide evidence that the emails generated by the RNN are more along the lines of phishing emails than the emails generated by the Dada Engine. Of course, the goal of the emails generated by the Dada engine is masquerade, not phishing. Because of the rule-based method employed, which uses complete sentences, the emails generated by the Dada engine have fewer problems of coherence and grammaticality. Error Analysis We review two types of errors observed in the evaluation of the RNN text generation models developed in this study.
First, the text generated by multiple RNN models suffers from repetitive tags and words. The second aspect of the error analysis is to look at the misclassifications made by the statistical detection algorithms. Here we look at a small sample of emails that were marked as legitimate despite being fake in nature, and try to investigate the factors that can explain these misclassification errors. Example (A): Hi GHT location <EID> Inc Dear <NET> Password Location <NET> of <NET> program We have been riding to meet In a of your personal program or other browser buyer buyer The email does not commit to a secure F or security before You may read a inconvenience during Thank you <NET> Example (B): Sir we account access will do so may not the emails about the <NET> This <NET> is included at 3 days while when to <NET> because the link below to update your account until the deadline we will received this information that we will know that your <NET> account information needs Example (C): Sir This is an verificati= <LINK> messaging center, have to inform you that we are conducting more software, Regarding Your Password : <LINK> & June 20, 2009 Webmail Please Click Here to Confirm Examples (A), (B) and (C) are emails generated from a model trained on legitimate and 50% of phishing data (Type (D) in Section 4.1.) using a temperature of 0.7. There can be quite a few reasons for the misclassification: almost all the above emails, despite being 'fake' in nature, have considerable overlap with words common to the legitimate text. Moreover, Example (A) shows only a weak indication of malicious intent. And although the malicious intent in Example (B) is notable to the human eye, the email still manages to fool a simple text-based email classification algorithm. Example (C) has multiple link tags, implying possible malicious intent or the presence of poisonous links. However, the position of these links plays an important role in deceiving the classifier. A majority of phishing emails have links at the end of the text body or after action words like click, look, here, confirm, etc. In this case, the links have been placed at arbitrary locations inside the text sequence, thereby making them harder to detect. These misclassifications on the part of the classifier can be eliminated by human intervention or by designing a more sensitive and sophisticated detection algorithm. Conclusions and Future Work While the text generated by the RNN models carries only 'some' malicious intent, the examples shown above are just a few steps from being coherent and congruous. We designed an RNN-based text generation system for generating targeted attack emails, which is a challenging task in itself and, to the best of our knowledge, a novel approach. The generated examples, however, suffer from random strings and grammatical errors. We identify a few areas of improvement for the proposed system: reduction of repetitive content as well as the inclusion of more legitimate and phishing examples for analysis and model training. We would also like to experiment with the addition of topics and tags like 'bank account', 'paypal', 'password renewal', etc., which may help generate more specific emails. It would be interesting to see how a generative RNN handles the topic-based email generation problem.
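The sketch below illustrates the kind of word-count-based detection pipeline described in the evaluation subsections above (document-term matrix plus SVM, Naive Bayes and Logistic Regression). The tiny in-line corpora and the choice of LinearSVC are placeholders of ours, not the authors' data or exact estimator settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Placeholder corpora: 0 = legitimate, 1 = phishing/fake.
train_texts = ["please find the meeting agenda attached",
               "verify your account now click <LINK>"]
train_labels = [0, 1]
test_texts = ["thanks for the update on the report",
              "your password expires today click <LINK> to confirm"]
test_labels = [0, 1]

vectorizer = CountVectorizer()                   # builds the document-term matrix
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

for name, clf in [("SVM", LinearSVC()),
                  ("Naive Bayes", MultinomialNB()),
                  ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_train, train_labels)
    print(name)
    print(classification_report(test_labels, clf.predict(X_test), zero_division=0))
```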
4,536
1908.06893
2967126358
With an increasing number of malicious attacks, the number of people and organizations falling prey to social engineering attacks is proliferating. Despite considerable research on mitigation systems, attackers continually improve their modus operandi by using sophisticated machine learning and natural language processing techniques to launch targeted attacks aimed at deceiving both detection mechanisms and victims. We propose a system for advanced email masquerading attacks using Natural Language Generation (NLG) techniques. Trained on legitimate content together with an influx of varying malicious content, the proposed deep learning system generates emails with malicious content, customized depending on the attacker's intent. The system leverages Recurrent Neural Networks (RNNs) for automated text generation. We also assess the performance of the generated emails in defeating statistical detectors, and compare and analyze the emails against a proposed baseline.
In this paper, we focus primarily on generation of fake emails specifically engineered for phishing and scamming victims. Additionally, we also look at some state-of-the-art phishing email detection systems. Researchers in @cite_24 extract a large number of text body, URL and HTML features from emails, which are then fed into supervised (SVMs, Neural Networks) as well as unsupervised (K-Means clustering) algorithms for the final verdict on the email nature. The system proposed in @cite_5 extracts 25 stylistic and structural features from emails, which are given to a supervised SVM for analysis of email nature. Newer techniques for phishing email detection based on textual content analysis have been proposed in @cite_26 @cite_20 @cite_12 @cite_3 . Masquerade attacks are generated by the system proposed in @cite_6 , which tunes the generated emails based on legitimate content and style of a famous personality. Moreover, this technique can be exploited by phishers for launching email masquerade attacks, therefore making such a system extremely dangerous.
{ "abstract": [ "Phishing causes billions of dollars in damage every year and poses a serious threat to the Internet economy. Email is still the most commonly used medium to launch phishing attacks [1]. In this paper, we present a comprehensive natural language based scheme to detect phishing emails using features that are invariant and fundamentally characterize phishing. Our scheme utilizes all the information present in an email, namely, the header, the links and the text in the body. Although it is obvious that a phishing email is designed to elicit an action from the intended victim, none of the existing detection schemes use this fact to identify phishing emails. Our detection protocol is designed specifically to distinguish between “actionable” and “informational” emails. To this end, we incorporate natural language techniques in phishing detection. We also utilize contextual information, when available, to detect phishing: we study the problem of phishing detection within the contextual confines of the user’s email box and demonstrate that context plays an important role in detection. To the best of our knowledge, this is the first scheme that utilizes natural language techniques and contextual information to detect phishing. We show that our scheme outperforms existing phishing detection schemes. Finally, our protocol detects phishing at the email level rather than detecting masqueraded websites. This is crucial to prevent the victim from clicking any harmful links in the email. Our implementation called PhishNet-NLP, operates between a user’s mail transfer agent (MTA) and mail user agent (MUA) and processes each arriving email for phishing attacks even before reaching the inbox.", "", "We focus on email-based attacks, a rich field with well-publicized consequences. We show how current Natural Language Generation (NLG) technology allows an attacker to generate masquerade attacks on scale, and study their effectiveness with a within-subjects study. We also gather insights on what parts of an email do users focus on and how users identify attacks in this realm, by planting signals and also by asking them for their reasoning. We find that: (i) 17 of participants could not identify any of the signals that were inserted in emails, and (ii) Participants were unable to perform better than random guessing on these attacks. The insights gathered and the tools and techniques employed could help defenders in: (i) implementing new, customized anti-phishing solutions for Internet users including training next-generation email filters that go beyond vanilla spam filters and capable of addressing masquerade, (ii) more effectively training and upgrading the skills of email users, and (iii) understanding the dynamics of this novel attack and its ability of tricking humans.", "Phishing is a form of identity theft that occurs when a malicious Web site impersonates a legitimate one in order to acquire sensitive information such as passwords, account details, or credit card numbers.Though there are several anti-phishing software and techniques for detecting potential phishing attempts in emails and detecting phishing contents on websites, phishers come up with new and hybrid techniques to circumvent the available software and techniques.", "", "In a phishing attack, an unsuspecting victim is lured, typically via an email, to a web site designed to steal sensitive information such as bank credit card account numbers, login information for accounts, etc. 
Each year Internet users lose billions of dollars to this scourge. In this paper, we present a general semantic feature selection method for text problems based on the statistical t-test and WordNet, and we show its effectiveness on phishing email detection by designing classifiers that combine semantics and statistics in analyzing the text in the email. Our feature selection method is general and useful for other applications involving text-based analysis as well. Our email body-text-only classifier achieves more than 95 accuracy on detecting phishing emails with a false positive rate of 2.24 . Due to its use of semantics, our feature selection method is robust against adaptive attacks and avoids the problem of frequent retraining needed by machine learning classifiers.", "Phishing email has become a popular solution among attackers to steal all kinds of data from people and easily breach organizations' security. Hackers use multiple techniques and tricks to raise the chances of success of their attacks, like using information found on social networking websites to tailor their emails to the target's interests, or targeting employees of an organization who probably can't spot a phishing email or malicious websites and avoid sending emails to IT people or employees from Security department. In this paper we focus on analyzing the coherence of information contained in the different parts of the email: Header, Body, and URLs. After analyzing multiple phishing emails we discovered that there is always incoherence between these different parts. We created a comprehensive method which uses a set of rules that correlates the information collected from analyzing the header, body and URLs of the email and can even include the user in the detection process. We take into account that there is no such thing called perfection, so even if an email is classified as legitimate, our system will still send a warning to the user if the email is suspicious enough. This way even if a phishing email manages to escape our system, the user can still be protected." ], "cite_N": [ "@cite_26", "@cite_3", "@cite_6", "@cite_24", "@cite_5", "@cite_20", "@cite_12" ], "mid": [ "150462035", "", "2598692538", "2148614760", "", "2156852385", "2602952449" ] }
Automated email Generation for Targeted Attacks using Natural Language
The continuous adversarial growth and learning has been one of the major challenges in the field of Cybersecurity. With the immense boom in usage and adaptation of the Internet, staggering numbers of individuals and organizations have fallen prey to targeted attacks like phishing and pharming. Such attacks result in digital identity theft causing personal and financial losses to unknowing victims. Over the past decade, researchers have proposed a wide variety of detection methods to counter such attacks (e.g., see (Verma and Hossain, 2013;Thakur and Verma, 2014;Verma and Dyer, 2015;Verma and Rai, 2015;Verma and Das, 2017), and references cited therein). However, wrongdoers have exploited cyber resources to launch newer and sophisticated attacks to evade machine and human supervision. Detection systems and algorithms are commonly trained on historical data and attack patterns. Innovative attack vectors can trick these pre-trained detection and classification techniques and cause harm to the victims. Email is a common attack vector used by phishers that can be embedded with poisonous links to malicious websites, malign attachments like malware executables, etc (Drake et al., 2004). Anti-Phishing Working Group (APWG) has identified a total of 121,860 unique phishing email reports in March 2017. In 2016, APWG received over 1,313,771 unique phishing complaints. According to sources in IRS Return Integrity Compliance Services, around 870 organizations had received W-2 based phishing scams in the first quarter of 2017, which has increased significantly from 100 organizations in 2016. And the phishing scenario keeps getting worse as attackers use more intelligent and sophisticated ways of scamming victims. Fraudulent emails targeted towards the victim may be constructed using a variety of techniques fine-tuned to create the perfect deception. While manually fine-tuning such emails guarantees a higher probability of a successful attack, it requires a considerable amount of time. Phishers are always looking for automated means for launching fast and effective attack vectors. Some of these techniques include bulk mailing or spamming, including action words and links in a phishing email, etc. But these can be easily classified as positive warnings owing to improved statistical detection models. Email masquerading is also a popular cyberattack technique where a phisher or scammer after gaining access to an individual's email inbox or outbox can study the nature/content of the emails sent or received by the target. He can then synthesize targeted malicious emails masqueraded as a benign email by incorporating features observed in the target's emails. The chances of such an attack being detected by an automated pre-trained classifier is reduced. The malicious email remain undetected, thereby increasing the chances of a successful attack. Current Natural Language Generation (NLG) techniques have allowed researchers to generate natural language text based on a given context. Highly sophisticated and trained NLG systems can involve text generation based on predefined grammar like the Dada Engine (Baki et al., 2017) or leverage deep learning neural networks like RNN (Yao et al., 2017) for generating text. Such an approach essentially facilitates the machine to learn a model that emulates the input to the system. The system can then be made to generate text that closely resembles the input structure and form. Such NLG systems can therefore become dangerous tools in the hands of phishers. 
Advanced deep learning neural networks (DNNs) can be effectively used to generate coherent sequences of text when trained on suitable textual content. Researchers have used such systems for generating textual content across a wide variety of genres -from tweets (Sidhaye and Cheung, 2015) to poetry (Ghazvininejad et al., 2016). Thus we can assume it is not long before phishers and spammers can use email datasets -legitimate and malicious -in conjunction with DNNs to generate deceptive malicious emails. By masquerading the properties of a legitimate email, such carefully crafted emails can deceive pre-trained email detectors, thus making people and organizations vulnerable to phishing scams. In this paper, we address the new class of attacks based on automated fake email generation. We start off by demonstrating the practical usage of DNNs for fake email generation and walk through a process of fine-tuning the system, varying a set of parameters that control the content and intent of the text. The key contributions of this paper are: 1. A study of the feasibility and effectiveness of deep learning techniques in email generation. 2. Demonstration of an automated system for generation of 'fake' targeted emails with a malicious intent. 3. Fine-tuning synthetic email content depending on training data -intent and content parameter tuning. 4. Comparison with a baseline -synthetic emails generated by Dada engine (Baki et al., 2017). 5. Detection of synthetic emails using a statistical detector and investigation of effectiveness in tricking an existing spam email classifier (built using SVM). Experimental Methodology The section has been divided into four subsections. We describe the nature and source of the training and evaluation data in Section 3.1. The pre-processing steps are demonstrated in Section 3.2. The system setup and experimental settings have been described in Section 3.3. Data description To best emulate a benign email, a text generator must learn the text representation in actual legitimate emails. Therefore, it is necessary to incorporate benign emails in training the model. However, as a successful attacker, our main aim is to create the perfect deceptive email -one which despite having malign components like poisoned links or attachments, looks legitimate enough to bypass statistical detectors and human supervision. Primarily, for the reasons stated above, we have used multiple email datasets, belonging to both legitimate and malicious classes, for training the system model and also in the quantitative evaluation and comparison steps. For our training model, we use a larger ratio of malicious emails compared to legitimate data (approximate ratio of benign to malicious is 1:4). Legitimate dataset. We use three sets of legitimate emails for modeling our legitimate content. The legitimate emails were primarily extracted from the outbox and inbox of real individuals. Thus the text contains a lot of named entities belonging to PERSON, LOC and ORGANIZATION types. The emails have been extracted from three different sources stated below: • 48 emails sent by Sarah Palin (Source 1) and 55 from Hillary Clinton (Source 2) obtained from the archives released in (The New York Times, 2011; WikiLeaks, 2016) respectively. • 500 emails from the Sent items folder of the employees from the Enron email corpus (Source 3) (Enron Corpus, 2015). Malicious dataset. The malicious dataset was difficult to acquire. 
We used two malicious sources of data, mentioned below: • 197 Phishing emails collected by the second author, called Verma phish below. • 3392 Phishing emails from Jose Nazario's Phishing corpus (Source 2) Evaluation dataset. We compared our system's output against a small set of automatically generated emails provided by the authors of (Baki et al., 2017). The provided set consists of 12 emails automatically generated using the Dada Engine and manually written grammar rules. The set consists of 6 emails masquerading as Hillary Clinton emails and 6 emails masquerading as emails from Sarah Palin. Tables 1 and 2 describe some statistical details about the legitimate and malicious datasets used in this system. We define length (L) as the number of words in the body of an email. We define Vocabulary (V) as the number of unique words in an email. A few observations from the datasets above: the malicious content is relatively more verbose than the legitimate counterparts. Moreover, the malicious dataset is also considerably larger than the legitimate one. Data Filtering and Preprocessing We applied several common preprocessing steps to the textual content in the data: • Removal of special characters like @, #, $, % as well as common punctuation marks from the email body. • Emails often contain URLs or email IDs. These can pollute and confuse the learning model as to which words in the text are more important. Therefore, we replaced the URLs and the email addresses with the <LINK> and <EID> tags respectively. • Replacement of named entities with the <NET> tag. We use Python NLTK NER for identification of the named entities. On close inspection of the training data, we found that the phishing emails had incoherent HTML content which can pollute the training model. Therefore, from the original data (in Table 2), we carefully filter out the emails that were not in English, and the ones in which all the text was embedded in HTML. These emails usually had a lot of random character strings, so the learning model could be polluted with such random text. Only the phishing emails in our datasets had such issues. Table 3 gives the details about the filtered phishing dataset. Experimental Setup We use a deep learning framework for the Natural Language Generation model. The system used for learning the email model is developed using Tensorflow 1.3.0 and Python 3.5. This section provides a background on Recurrent Neural Networks for text generation. Deep Neural Networks are computational models built from densely connected layers of neurons, used to solve complex machine learning tasks. Recurrent Neural Networks (RNNs) are a type of deep learning network better suited to sequential data. RNNs can be used to learn character and word sequences from natural language text (used for training). The RNN system used in this paper is capable of generating text at varying levels of granularity, i.e. at the character level or the word level. For our training and evaluation, we make use of word-based RNNs, since previous text generation systems (Xie et al., 2017), (Henderson et al., 2014) have generated coherent and readable content using word-level models. A comparison between character-based and word-based LSTMs in (Xie et al., 2017) showed that, for a sample of generated text, word-level models have lower perplexity than character-level deep learners.
This is because character-based text generators suffer from spelling errors and incoherent text fragments. 3.3.1. RNN architecture Traditional language models like N-grams are limited by the amount of history, i.e. the length of the preceding text sequence, that they can look back on while training. RNNs, in contrast, are able to retain longer-term information from a text sequence, acting as a "memory"-based model. Plain RNNs are, however, not the best performers when it comes to preserving long-term dependencies. For this reason we use Long Short-Term Memory (LSTM) networks, which are able to learn a better language/text representation for longer sequences of text. We experimented with a few combinations of hyperparameters; the number of RNN nodes, the number of layers, the number of epochs and the number of time steps were chosen empirically. The input text is fed into our RNN in the form of word embeddings. The system was built using 2 hidden LSTM layers, and each LSTM cell has 512 nodes. The input data is split into mini-batches of 10 and trained for 100 epochs with a learning rate of $2 \times 10^{-3}$. The sequence length was set to 20. We use the cross-entropy (softmax) loss (Goodfellow et al., 2016) to compute the training loss, and the Adam optimization technique (Kingma and Ba, 2014) is used for updating the weights. The system was trained on an Amazon Web Services EC2 Deep Learning instance using an Nvidia Tesla K80 GPU. The training takes about 4 hours. Text Generation and Sampling The trained model is used to generate the email body based on the nature of the input. We varied the sampling technique used for generating new words during text generation. Generation phase. Feeding a word $w_0$ into the trained LSTM network outputs the word most likely to occur after $w_0$, namely $w_1$, according to $P(w_1 \mid w_0)$. If we want to generate a text body of $n$ words, we feed $w_1$ back into the RNN model and get the next word by evaluating $P(w_2 \mid w_0, w_1)$. This is done repeatedly to generate a text sequence of $n$ words: $w_0, w_1, w_2, \ldots, w_n$. Sampling parameters. We vary our sampling parameters to generate the email body samples. For our implementation, we choose temperature as the main parameter. Given a training sequence of words $w_0, w_1, w_2, \ldots, w_n$, the goal of the trained LSTM network is to predict the best set of words that follow the training sequence as output. Based on the input set of words, the model builds a probability distribution $P(w_{t+1} \mid w_{t' \le t}) = \mathrm{softmax}(w_t)$, where the softmax normalization with temperature control ($Temp$) is defined as $P(\mathrm{softmax}(w_t^j)) = \frac{K(w_t^j, Temp)}{\sum_{j=1}^{n} K(w_t^j, Temp)}$, with $K(w_t^j, Temp) = e^{w_t^j / Temp}$. The novelty or eccentricity of the RNN text generation model can be controlled by varying the temperature parameter in the range $0 < Temp \le 1.0$ (the maximum value is 1.0). We vary the nature of the model's predictions using two main mechanisms, deterministic and stochastic. Lower values of $Temp$ generate relatively deterministic samples, while higher values make the process more stochastic. Both mechanisms suffer from issues: deterministic samples can suffer from repetitive text, while the samples generated using the stochastic mechanism are prone to spelling mistakes, grammatical errors and nonsensical words. We generate our samples by varying the temperature values over 0.2, 0.5, 0.7 and 1.0.
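To make the architecture just described more concrete, the sketch below re-creates a comparable word-level language model in tf.keras (2 LSTM layers of 512 units, Adam at 2 × 10^-3, mini-batches of 10). The original system was built on TensorFlow 1.3; the vocabulary size and embedding width used here are assumed values, since they are not reported.

```python
import tensorflow as tf

VOCAB_SIZE = 10000   # assumed vocabulary size
EMBED_DIM = 128      # assumed embedding width
SEQ_LEN = 20         # sequence length used in the paper

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),        # word embeddings
    tf.keras.layers.LSTM(512, return_sequences=True),        # first hidden LSTM layer
    tf.keras.layers.LSTM(512),                                # second hidden LSTM layer
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),  # next-word distribution
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-3),
              loss="sparse_categorical_crossentropy")

# X: integer word-index windows of length SEQ_LEN; y: index of the following word.
# model.fit(X, y, batch_size=10, epochs=100)
```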
For our evaluation and detection experiments, we randomly select 25 system-generated samples: 2 samples generated at a temperature of 0.2, 10 samples at temperature 0.5, 5 samples at a temperature of 0.7 and 8 samples at temperature 1.0. Customization of Malicious Intent One important aspect of malicious emails is their harmful intent. The perfect attack vector will have malicious elements like a poisonous link or malware attachment wrapped in a legitimate context, sly enough to fool both a state-of-the-art email classifier and the victim. One novelty of this system's training is the procedure of injecting malicious intent during training and generating malicious content in the synthetic emails. We followed a percentage-based influx of malicious content into the training model along with the legitimate emails. The training models were built by varying the percentage (5%, 10%, 30% and 50%) of phishing emails selected from the entire phishing dataset, used along with the entire legitimate email dataset. We trained separate RNN models on all these configurations (an illustrative sketch of this percentage-based mixing is given at the end of this section). For studying the varying content in the emails, we generate samples using temperature values of 0.2, 0.5, 0.7 and 1.0. Detection using Existing Algorithms We perform a simple quantitative evaluation by using three text-based classification algorithms on our generated emails. Using the Python SciKit-Learn library, we test three popular text-based filtering algorithms: Support Vector Machines (Maldonado and L'Huillier, 2013), Naive Bayes (Witten et al., 2016) and Logistic Regression (Franklin, 2005). The training set was modeled as a document-term matrix, and the word-count vector is used as the feature for building the models. For our evaluation, we train Support Vector Machine (SVM), Naive Bayes (NB) and Logistic Regression (LR) models on training data of 300 legitimate emails from the WikiLeaks archives and 150 phishing emails from Cornell PhishBowl (IT@Cornell, 2018). We test on 100 legitimate emails from the WikiLeaks archives that were not included in the training set and 25 'fake' emails that were generated by our natural language generation model. Analysis and Results We discuss the results of the generative RNN model in this section. We give examples of the email text generated with various training models and varying temperatures. We also provide the accuracy of the trained classifiers on a subset of these generated email bodies (after slight post-processing). We try to provide a qualitative as well as a quantitative review of the generated emails. Examples of Machine-generated emails (A) Training only on Legitimates and varying sampling temperatures We show examples of emails generated using models trained only on legitimate emails and sampled at different temperatures. Example I at Temperature = 1.0: Dear <NME> The article in the <NME> offers promotion should be somewhat changed for the next two weeks. <NME> See your presentation today. <NME> Example II at Temperature = 0.7: Sir I will really see if they were more comments tomorrow and review and act upon this evening <NET>. The engineer I can add there some <LINK> there are the issues <NET>. Could you give me a basis for the call him he said The examples above show that, while small substrings make some sense, the sequence of text fragments makes very little sense when read as a whole.
Comparing these with the phishing email structure described in (Drake et al., 2004), the generated emails have very little malicious content. The red text marks the incongruous text pieces that do not make sense. (B) Training on Legitimates + 5% Malicious content: In the first step of intent injection, we generate emails by providing the model with all the legitimate emails and 5% of the cleaned phishing email data (Table 3). Thus, for this model, we create the input data with 603 legitimate emails and 114 randomly selected phishing emails. We show two samples generated using temperature values of 0.5 and 0.7. Example I at Temperature = 0.5: Sir Here are the above info on a waste of anyone, but an additional figure and it goes to <NET>. Do I <NET> got the opportunity for a possible position between our Saturday <NME> or <NET> going to look over you in a presentation you will even need <NET> to drop off the phone. Example II at Temperature = 0.7: Hi owners <NET> your Private <NET> email from <NET> at <NET> email <NET> Information I'll know our pending your fake check to eol thanks <NET> and would be In maintenance in a long online demand The training data thus consists of benign and malicious emails in an approximate ratio of 5:1. Some intent and urgency can be seen in the email context, but the incongruent words still remain. (C) Training on Legitimates + 30% Malicious content: We further improve upon the model proposed in (B). In this training step, we feed our text generator all the legitimate emails (603 benign) coupled with 30% of the malicious email data (683 malicious). This is an almost balanced dataset of benign and phishing emails. The following examples demonstrate the variation in text content in the generated emails. Example I at Temperature = 0.5: Sir we account access will do so may not the emails about the <NET> This <NET> is included at 3 days while when to <NET> because link below to update your account until the deadline we will received this information that we will know that your <NET> account information needs Example II at Temperature = 1.0: Dear registered secur= online, number: hearing from This trade guarded please account go to pay it. To modify your Account then fill in necessary from your notification preferences, please PayPal account provided with the integrity of information on the Alerts tab. A good amount of the generated text aligns with the features of malicious emails described in (Drake et al., 2004) and carries malicious intent. We choose two examples to demonstrate the nature of the text in the generated emails, and include examples from the subsequent evaluation steps. (D) Training on Legitimates + 50% Malicious content: In this training step, we consider a total of 50% of the malicious data (1140 phishing emails) and 603 legitimate emails. This is done to observe whether training on unbalanced data, with roughly twice as many malign instances as legitimate ones, can successfully incorporate obvious malicious flags like poisonous links, attachments, etc. We show two examples of emails generated using the deep learners at varying sampling temperatures. Example I at Temperature = 0.7: If you are still online. genuine information in the message, notice your account has been frozen to your account in order to restore your account as click on CONTINUE Payment Contact <LINK> UK.
Example II at Temperature = 0.5: Hi will have temporarily information your account will be restricted during that the Internet accounts and upgrading password An data Thank you for your our security of your Account Please click on it using the <NET> server This is an new offer miles with us as a qualified and move in The generated text reflects malicious features like URL links and a tone of urgency. We can assume that the model picks up important cues of malign behavior and learns to incorporate such cues into the sampled data during the training phase. Evaluation using Detection Algorithm We train text classification models using Support Vector Machines (SVM), Naive Bayes (NB) and Logistic Regression (LR) on training data of 300 legitimate emails from the WikiLeaks archives (https://wikileaks.org/) and 150 phishing emails from Cornell PhishBowl (IT@Cornell, 2018). We test on 100 legitimate emails from the WikiLeaks archives that were not included in the training set and 25 'fake' emails that were generated by our natural language generation model trained on a mix of legitimate and 50% malicious emails. We randomly select the emails (the distribution is: 2 samples generated at a temperature of 0.2, 10 samples at temperature 0.5, 5 samples at a temperature of 0.7 and 8 samples at temperature 1.0) for our evaluation. We use the Scikit-Learn Python library to generate the document-term matrix and the word-count vector from a given email body, which are used as features for training the classification models. Table 4 reports the accuracy, precision, recall, and F1-scores on the test dataset using the SVM, Naive Bayes and Logistic Regression classifiers. Comparison of emails with another NLG model The authors in (Baki et al., 2017) discuss using a Recursive Transition Network for generating fake emails similar in nature to legitimate emails. The paper describes a user study testing the efficacy of these fake emails and their effectiveness in deceiving people. The authors use only legitimate emails to train their model and generate emails similar to their training data, termed 'fake' emails. In this section, we compare a couple of examples selected randomly from the emails generated by the Dada Engine used in (Baki et al., 2017) with the outputs of our deep learning system. Generated by the RNN (Example I): Hi will have temporarily information your account will be restricted during that the Internet accounts and upgrading password An data Thank you for your our security of your Account Please click on it using the <NET> server This is an new offer miles with us as a qualified and move in Generated by the RNN (Example II): Sir Kindly limit, it [IMAGE] Please contact us contained on this suspension will not be = interrupted by 10 product, or this temporary cost some of the Generated by the Dada Engine: Great job on the op-ed! Are you going to submit? Also, Who will be attending? The examples provide evidence that the emails generated by the RNN are more along the lines of phishing emails than the emails generated by the Dada Engine. Of course, the goal of the emails generated by the Dada engine is masquerade, not phishing. Because of the rule-based method employed, which uses complete sentences, the emails generated by the Dada engine have fewer problems of coherence and grammaticality. Error Analysis We review two types of errors observed in the evaluation of the RNN text generation models developed in this study.
First, the text generated by multiple RNN models suffers from repetitive tags and words. The second aspect of the error analysis is the misclassifications made by the statistical detection algorithms. Here we look at a small sample of emails that were marked as legitimate despite being fake in nature, and we investigate the factors in these examples that can explain the misclassification errors made by the algorithms.
Example (A): Hi GHT location <EID> Inc Dear <NET> Password Location <NET> of <NET> program We have been riding to meet In a of your personal program or other browser buyer buyer The email does not commit to a secure F or security before You may read a inconvenience during Thank you <NET>
Example (B): Sir we account access will do so may not the emails about the <NET> This <NET> is included at 3 days while when to <NET> because the link below to update your account until the deadline we will received this information that we will know that your <NET> account information needs
Example (C): Sir This is an verificati= <LINK> messaging center, have to inform you that we are conducting more software, Regarding Your Password : <LINK> & June 20, 2009 Webmail Please Click Here to Confirm
Examples (A), (B) and (C) are emails generated from a model trained on legitimate and 50% of phishing data (Type (D) in Section 4.1) using a temperature of 0.7. There can be quite a few reasons for the misclassification: almost all of the above emails, despite being 'fake' in nature, have considerable overlap with words common in legitimate text. Moreover, Example (A) gives only a weak indication of malicious intent, and Example (B), although its malicious intent is noticeable to the human eye, still manages to fool a simple text-based email classification algorithm. Example (C) has multiple link tags implying possible malicious intent or the presence of poisonous links. However, the position of these links plays an important role in deceiving the classifier. A majority of phishing emails have links at the end of the text body or after action words like click, look, here, confirm, etc. In this case, the links have been placed at arbitrary locations inside the text sequence, thereby making them harder to detect. These misclassifications on the part of the classifier can be eliminated by human intervention or by designing a more sensitive and sophisticated detection algorithm.
Conclusions and Future Work
While the RNN model generated text that had 'some' malicious intent in it, the examples shown above are just a few steps from being coherent and congruous. We designed an RNN-based text generation system for generating targeted attack emails, which is a challenging task in itself and, to the best of our knowledge, a novel approach. The generated examples, however, suffer from random strings and grammatical errors. We identify a few areas of improvement for the proposed system: reduction of repetitive content, as well as inclusion of more legitimate and phishing examples for analysis and model training. We would also like to experiment with the addition of topics and tags like 'bank account', 'paypal', 'password renewal', etc., which may help generate more specific emails. It would be interesting to see how a generative RNN handles the topic-based email generation problem.
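As a concrete illustration of the statistical detection setup discussed in the evaluation and error analysis sections above, the following is a minimal sketch of a bag-of-words email classifier. It assumes scikit-learn; the data variables, labels and hyperparameters are placeholders, not the authors' exact configuration or corpora.

```python
# Minimal sketch of a bag-of-words email classification pipeline of the kind
# described in the evaluation section (scikit-learn assumed; texts/labels are
# placeholders, with labels 0 = legitimate and 1 = phishing).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def train_and_evaluate(train_texts, train_labels, test_texts, test_labels):
    """Train SVM, NB and LR on word-count features and report test metrics."""
    vectorizer = CountVectorizer()                  # document-term matrix
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)

    models = {
        "SVM": LinearSVC(),
        "NB": MultinomialNB(),
        "LR": LogisticRegression(max_iter=1000),
    }
    for name, model in models.items():
        model.fit(X_train, train_labels)
        pred = model.predict(X_test)
        acc = accuracy_score(test_labels, pred)
        p, r, f1, _ = precision_recall_fscore_support(
            test_labels, pred, average="binary", pos_label=1)
        print(f"{name}: accuracy={acc:.3f} precision={p:.3f} "
              f"recall={r:.3f} f1={f1:.3f}")
```

A classifier of this kind sees only word counts, which is consistent with the observation above that generated emails sharing vocabulary with legitimate mail, or placing links at unusual positions, can slip past it.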
4,536
1908.06851
2968588162
Fingerprinting techniques, which are a common method for indoor localization, have recently been applied successfully in outdoor settings. In particular, the communication signals of Low Power Wide Area Networks (LPWAN), such as Sigfox, have been used for localization. In this rather recent field of study, few publicly available datasets, which would facilitate the consistent comparison of different positioning systems, exist so far. In the current study, a published dataset of RSSI measurements on a Sigfox network deployed in Antwerp, Belgium, is used to analyse the appropriate selection of preprocessing steps and to tune the hyperparameters of a kNN fingerprinting method. Initially, the tuning of the hyperparameter k for a variety of distance metrics, and the selection of efficient data transformation schemes proposed by relevant works, are presented. In addition, accuracy improvements are achieved in this study through a detailed examination of the appropriate adjustment of the parameters of the tested data transformation schemes, and of the handling of out-of-range values. With the appropriate tuning of these factors, the achieved mean localization error was 298 meters, and the median error was 109 meters. To facilitate the reproducibility of tests and the comparability of results, the code and the train/validation/test split used in this study are made available.
The proliferation of Low Power Wide Area Networks (LPWAN), such as Sigfox and LoRaWAN, has opened a new domain of application for fingerprinting methods. A recent study @cite_10 has experimentally verified the intuitive assumption that, in a Sigfox setting, fingerprinting methods outperform proximity or ranging positioning methods in terms of accuracy.
{ "abstract": [ "Location-based services play an important role in Internet of Things (IoT) applications. However, a trade-off has to be made between the location estimation error and the battery lifetime of an IoT device. As IoT devices communicate over Low Power Wide Area Networks (LPWAN), signal strength localization methods can use the existing communication link to estimate their location. In this paper, we present a comparison of three proximity methods, one fingerprinting method and three ranging methods using Sigfox communication messages. To evaluate these methods, we use a ground truth Sigfox dataset which we collected in a large urban environment, as well as new evaluation data that was collected in the same urban area. With a mean estimation error of 586 m, our fingerprinting method achieves the best result compared to other signal strength localization methods." ], "cite_N": [ "@cite_10" ], "mid": [ "2902629385" ] }
A Reproducible Analysis of RSSI Fingerprinting for Outdoor Localization Using Sigfox: Preprocessing and Hyperparameter Tuning
The recent emergence of Internet of Things (IoT) technologies has led to a plethora of low power devices appearing worldwide in people's everyday life. The concept of smart cities is becoming familiar to the broad public, and numerous applications are being proposed, implemented and deployed in domains such as massive gathering of sensor measurements, automatic control, asset tracking, etc. One domain of great interest concerning IoT technologies is the offering of Location Based Services. Outdoor positioning is generally considered a solved problem, as various Global Navigation Satellite Systems (GNSS), such as the commonly known Global Positioning System (GPS), Galileo, GLONASS and BeiDou, have made outdoor positioning an everyday reality for most users of smartphones and custom positioning devices. These systems achieve an impressive accuracy in their estimates. Nevertheless, the battery consumption of their chipsets is considerable, and when it comes to low power IoT mobile devices, their usage is problematic. Therefore, an alternative way of localizing such devices is needed. The proliferation of IoT devices has been facilitated by the increasing marketization of different LPWAN technologies, such as Sigfox or LoRaWAN. The localization capabilities of these technologies have been tested in practice. The LoRa alliance has released a geolocalization white paper, presenting an overview of the methods tested and used by its members [1]. Similarly, the Sigfox company is advertising the capabilities of its localization service [2]. A detailed comparative analysis of the most prominent LPWAN technologies can be found in [3] and references therein. The architecture of these networks is straightforward. Basestations, which are connected to a central server, are deployed statically in urban and rural areas. Depending on the use case, the low power devices might be statically deployed as, for instance, sensors repeating measurements in fixed locations, or might be mobile. In the latter case, they could be mounted on vehicles so that they report sensor measurements in various locations, or they could be used for asset tracking. The devices transmit messages formatted according to the protocol of the technology used, which are received by the basestations in range. All messages are gathered at a central server. Apart from the content of the message, which depends on the use case and the task assigned to the mobile devices, several other types of information concerning the transmission are reported. These types may be: the Received Signal Strength Indicator (RSSI), the Time of Arrival (ToA), the Time Difference of Arrival (TDoA), the Logarithmic Signal over Noise Ratio (LSNR), etc. This information can be utilized by ranging techniques of localization, such as multilateration, to offer position estimates. Ranging techniques have the advantage that they do not require a surveying phase and can be used directly, assuming knowledge of the basestations' locations.
Nevertheless, since they do not inherently contain information related to the particularities of the environment over which they are used, they have a disadvantage, in terms of accuracy, when compared with fingerprinting methods. Fingerprinting methods rely on datasets that are collected throughout the area of interest. These datasets contain measurements of signal reception values characterizing known locations. The fingerprints recorded at known locations are used to build models which predict the unknown location of new signal receptions. A disadvantage of fingerprinting techniques is that the creation and maintenance of an up-to-date fingerprint database requires considerable effort and cost. One practical way to record such a dataset outdoors is presented in the work of Aernouts et al. [3]. In that work, a dataset is made publicly available and its collection methodology is presented. A total of 20 cars of the Belgian postal services were equipped with low power devices communicating via Sigfox and LoRaWAN with a central server. At the same time, the location of the car, as estimated by a GPS device, was also reported. In these datasets, the GPS estimates are considered as the spatial ground truth of the location of the message transmission. Recent works in the field of indoor and outdoor positioning [3], [4], [5] have underlined the fact that it is indispensable for publications in the field to favour the reproducibility of experiments and the comparability of results. In this spirit, we utilize a publicly available Sigfox dataset [3], and we share with the community the code of the experiments of the current study (DOI: 10.5281/zenodo.3228752), and the train/validation/test set split used in said dataset (DOI: 10.5281/zenodo.3228744), to facilitate a consistent comparison of results in future works. In the current study, we present a detailed examination of the hyperparameter tuning and of the process of selecting the most appropriate preprocessing methodologies for RSSI fingerprinting in an urban Sigfox setting. A systematic examination of the data preprocessing and the hyperparameter tuning steps can optimize the performance of a localization system. Consequently, it may offer an evaluation of the capabilities of the technology used. This work aims to exemplify such a process, and to characterize and report the capabilities of a Sigfox-based localization system in a well defined urban setting. The rest of this paper is organized as follows. In Section II, the work relevant to this subject is discussed. Section III presents in detail the dataset used in this work. In Section IV the preprocessing steps analysed in this work are presented. After a concise presentation of the experimental setup in Section V, an extensive presentation and discussion of the results is developed in Section VI. Finally, conclusions drawn and ideas for future work are presented in Section VII.
III. THE DATASET USED
Aernouts et al. [3] have made publicly available three fingerprinting datasets of Low Power Wide Area Networks. Two of these datasets were collected in the urban area of Antwerp, one using Sigfox and another using LoRaWAN. The third dataset was collected in the rural area between the towns of Antwerp and Ghent, using Sigfox. The authors underline their motivation by mentioning in their work that: 'With these datasets, we intend to provide the global research community with a benchmark tool to evaluate fingerprinting algorithms for LPWAN standards.'
In this work, we have used the first dataset, containing Sigfox messages in the urban area of Antwerp. The fingerprints are collected in an area of approximately 53 square kilometers, though the majority of them lie in the central area of Antwerp, which is approximately half the size of the full area. A total number of 14378 messages are reported in the dataset. Each message contains the following information: the RSSI value of the transmitted signal as received by each of the 84 base stations, and the spatial ground truth of the signal's transmission location, as estimated by a GPS device operating alongside the Sigfox devices. Undeniably, the fact that an estimate which is itself subject to error is used as ground truth introduces bias. Nevertheless, the error of GPS is on the order of a few tens of meters, while the localization accuracy is on the order of several hundreds of meters. In order to obtain a sense of the distribution of the RSSI values of the dataset, the histogram of all 317126 RSSI values present in the dataset is plotted in Figure 1. A large part of the distribution (more than 60% of the data) is concentrated in the [−140, −120] range, with almost 10% of the data in the value range above −100. In cases where a basestation did not receive a message, an out-of-range RSSI value of −200 was inserted, in order not to leave an empty value and also to differentiate this entry from the minimum RSSI value actually received (−156). The out-of-range values of −200 were not included in the histogram's creation. For the purposes of our study, we have split the dataset into a training, a validation and a test set containing 70%, 15%, and 15% of the samples respectively (10063, 2157 and 2157 entries, in absolute numbers). The spatial distribution of the ground truth locations of the three aforementioned sets can be seen in Figure 2. In the spirit of verifiability, reproducibility and comparability of results, it is important not only to report performance metrics over a publicly available dataset, but to also report the specific way the dataset is split into a training, a validation and a test set. In this way, researchers will be able to train models on the same training set, make decisions about the optimal tuning on the same validation set, and, most importantly, report unbiased performance results on the same test set. The three subsets that are used in the current study are made available to the research community for future reference. For the cases where a different training and validation strategy is desired, or an entirely different dataset is to be used, the full code implementation is available so that the same tests can be reproduced in these different settings.
IV. DATA PREPROCESSING
In a highly cited work, Torres-Sospedra et al. [14] have presented a systematic study of data preprocessing methods, and of distance and similarity metrics, for Wi-Fi fingerprinting in indoor positioning systems. In that work, the authors have shown the great significance of an appropriate data preprocessing step, and have presented four alternative methods of preprocessing the fingerprint data. These four data representation methods are the following. As defined in [14], the positive values data representation subtracts the minimum RSSI value from all the entries:

$\mathrm{Positive}_i(x) = \begin{cases} RSS_i - \mathit{min} & \text{if basestation } i \text{ received the message and } RSS_i \ge \tau \\ 0 & \text{otherwise} \end{cases}$ (1)

where $RSS_i$ is the RSS from the $i$-th basestation, and $\mathit{min}$ is the lowest RSS value minus 1 among all RSS values of the database [14].
Also, $\tau$ is a threshold value, used so that basestations with intensities lower than $\tau$ are considered as not detected, and the lowest possible value is assigned to them. It is worth emphasizing here that, since utilizing in the training set's preprocessing any kind of information coming from the validation or the test set would introduce information leakage, the $\mathit{min}$ should only be calculated from the training set data. Calculating the $\mathit{min}$, or normalizing according to the data of the full dataset, would imply a breach of protocol of the train/validation/testing scheme, where the validation and testing sets are supposed to be completely unknown during the preprocessing phase. The thresholded version of the representation is then obtained as follows:
• Replace all values $RSS_i < \tau$ with $\tau$, where $\tau \ge \mathit{min}$.
• Define $\mathrm{Positive}_i(x)$ as:

$\mathrm{Positive}_i(x) = RSS_i - \tau$ (2)

The normalized data representation rescales the positive values data representation into the [0, 1] range (note that the RSSI values are negative):

$\mathrm{Normalized}_i(x) = \mathrm{Positive}_i(x) / (-\mathit{min})$ (3)

The exponential and powed representations are the result of the intention to go beyond the linear handling of the RSSI values, since the RSSI values correspond to a logarithmic scale. More particularly, the exponential and powed representations are defined as follows:

$\mathrm{Exponential}_i(x) = \dfrac{\exp(\mathrm{Positive}_i(x)/\alpha)}{\exp(-\mathit{min}/\alpha)}$ (4)

$\mathrm{Powed}_i(x) = \dfrac{(\mathrm{Positive}_i(x))^{\beta}}{(-\mathit{min})^{\beta}}$ (5)

The proposed default values for the parameters $\alpha$ and $\beta$ are $\alpha = 24$ and $\beta = e$, where $e$ is the mathematical constant. Nevertheless, since these parameters were selected upon testing with Wi-Fi signals in an indoor setting, which has a different range and distribution of RSSI values, it is advisable to adjust them in accordance with a new setting. This foundational analysis of data preprocessing presented in [14] has been used as a reference in the work by Janssen et al. [13], and is studied in further detail in the current work. The impact of these preprocessing steps and of the appropriate tuning of their parameters on the achieved positioning accuracy is extensively discussed in Section VI, along with all results of this study.
V. EXPERIMENTAL SETUP
In this study, a detailed examination of various preprocessing methods is presented, along with a hyperparameter tuning of the k-nearest neighbours (kNN) method. For the experiments presented in this study, the free Python machine learning library scikit-learn was used; particularly, scikit-learn version 0.19.1 and Python 3.5.5. The Haversine formula has been used for measuring distances in the reported experiments. Upon completion of the experiments, the Vincenty formula, which gives the shortest geodesic distance on an ellipsoid-modeled Earth, was also tested. It is straightforward that the comparisons among the performance of models remain consistent if either of the two formulas is used. The absolute distances measured with the two formulas differ by less than 0.5%. The common train/validation/test set division methodology is used in the analyses presented in this work. The train set is used to train the model. In the case of kNN, no training process needs to take place per se; the training set practically creates the space of neighbors against which each new fingerprint is compared. The validation set is used for evaluating different candidate models and checking the appropriateness of the selected hyperparameter values and preprocessing steps. Based on the performance on the validation set, the optimal model configuration is chosen.
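Before continuing with the description of the evaluation protocol, the following minimal NumPy sketch illustrates the four data representations defined in Equations 1–5. The parameter names mirror the equations and the defaults follow [14]; this is an illustrative implementation, not the exact code used in the experiments.

```python
import numpy as np

def positive(rssi, tau, rss_min):
    """Eq. 2: clip everything below tau (including the -200 out-of-range
    placeholder) to tau, then shift so the smallest value becomes 0."""
    return np.maximum(rssi, tau) - tau

def normalized(rssi, tau, rss_min):
    """Eq. 3: rescale the positive representation roughly into [0, 1]."""
    return positive(rssi, tau, rss_min) / (-rss_min)

def exponential(rssi, tau, rss_min, alpha=24.0):
    """Eq. 4: exponential representation with parameter alpha."""
    return np.exp(positive(rssi, tau, rss_min) / alpha) / np.exp(-rss_min / alpha)

def powed(rssi, tau, rss_min, beta=np.e):
    """Eq. 5: powed representation with parameter beta."""
    return positive(rssi, tau, rss_min) ** beta / (-rss_min) ** beta

# Usage sketch: rss_min is computed on the training set only (to avoid
# leakage); tau = -157 corresponds to replacing the -200 placeholder with
# the experimental minimum of received values minus one.
# X_train_powed = powed(X_train, tau=-157.0, rss_min=-157.0)
```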
As both the train and validation sets have participated in the configuration of the final model, an unbiased final evaluation of the model's performance needs to take place using a third set containing previously unseen data: the test set. Therefore, during all preprocessing, training and tuning steps, the test set is inaccessible, and no information stemming from it is to be used. Initially, similarly to the work by Janssen et al. [13], we examine various distance metrics (the full list of the Distance Metrics class of scikit-learn) and tune the hyperparameter k for each distance metric. We perform this analysis twice: once using the dataset as it is, with the out-of-range −200 value unchanged, and then again, replacing it with the experimentally found minimum of actually received RSSI values in the training set minus one (−157). Moreover, we examine the performance of a wide range of candidate values of the threshold value τ of Equation 2. Lastly, concerning the parametric preprocessing representations, exponential and powed, a tuning of their respective parameters (α and β) is performed.
VI. RESULTS
A. Distance Metrics, Data Preprocessing and k
In this test, we examine various distance metrics and tune the hyperparameter k for each distance metric. In the experiments of the current study, all four data representations defined in Equations 2-5 were tested as a preprocessing step. The results of these tests, presented in Tables I and II, have verified a fact that was observed in previous works as well ([14], [13]): the exponential and powed representations clearly outperform the positive and normalized data representations, for all distance metrics used. For simplicity, in Tables I and II we only report the results of the two most performant methods, exponential and powed. It should be underlined that the selection of the optimal hyperparameter value k and the localization error statistics reported in Tables I and II are calculated on the validation set. Moreover, in the context of this test, the default values of the parameters (α = 24 and β = e) of the exponential and powed representations are used. Concerning the distance metrics, apart from those reported in Tables I and II, another family of distance metrics available in scikit-learn was evaluated, but proved to be entirely unsuitable. That family of metrics, such as the Jaccard, Matching, Dice or Kulsinski distance, is intended for boolean-valued vector spaces, setting as True any non-zero entry. Consequently, those metrics utilize only a binary type of information stating whether each basestation has received the message or not. Overall, the results show that, in accordance with the previous works, the Bray-Curtis metric (equivalent to the Sørensen metric mentioned in previous works [14], [13]) offers the best results in terms of accuracy. Comparing the results of Tables I and II, the following conclusions can be drawn. In Table I, the exponential representation is more performant than the powed one, for all distance metrics except for the Canberra distance. On the contrary, in Table II it is the powed representation that offers better results, for all but one distance metric. In Table I the best performance is achieved by the Bray-Curtis metric and the exponential data representation, with a mean and median error of 344 and 148 meters respectively. For the results of Table II, the best performance is achieved by the Bray-Curtis metric and the powed data representation, with 319 and 123 meters of mean and median error respectively.
The corresponding mean and median errors on the test set are 301 and 109 meters respectively. While the results obtained by the positive and normalized representations were not reported in Tables I and II, it is worth briefly discussing the performance they achieve. The Bray-Curtis distance metric remains the best performing one for both data representations. Since the normalized representation is just a rescaled version of the positive data representation, both methods achieve identical results in this setting. When the dataset is used as is, meaning with τ = −200, the mean error on the validation set is 552 meters, while for τ = −157, the error is 400 meters. A first observation is that an appropriate transformation of the RSS values may entirely change the level of accuracy, leading from an initial error of 552 meters with the linear handling of the RSS values to a 344-meter error. Furthermore, adjusting the τ value, and therefore the way out-of-range values are set, may further improve the performance, hence the 319-meter mean error observed in Table II. In an attempt to interpret this difference in performance between the two different threshold values used, the following arguments are presented. It appears that the selection of an out-of-range value may drastically affect the performance of the positioning method. In the studied dataset, the −200 out-of-range value is quite distinct from the experimental minimum RSSI found in the dataset (−156). This artificial gap between the two values may assign significant importance to the distinction between a very distant gateway receiving the signal and a non-receiving one. On the other hand, bridging this gap may treat these two cases as more similar, and favor the distinction of RSSI values among closer detected basestations, improving the distance measurement between fingerprints, and consequently the efficient selection of the closest neighbours.
B. Threshold Value τ
In this test, the impact of the value of the τ threshold on the localization performance is examined. The best configuration found so far is used (k = 6, with a powed data representation) for examining all τ values in the range [−200, −130]. Setting τ to a value higher than the experimental minimum of −156 will replace all values in the range [−156, τ] with τ. Out-of-range values that are lower than the experimental minimum are simply set to τ. The results of this analysis are reported in Figure 3. It is evident that the best performance comes from values around the experimental minimum. In particular, the lowest mean error in the validation set was 317 meters, and it was given by τ = −159. The corresponding performance on the test set was a 298-meter mean error and a 109-meter median error. The optimal value τ = −159 is just below the experimental minimum of received RSSI values, which suggests that putting the out-of-range value just below the experimental minimum is the best option, according to this test.
C. Parameters α and β of the Exponential and Powed Data Representations
As presented in Section IV, the exponential and powed data representations rely on a parameter (α and β respectively). The study that introduced these two representations [14] recommends default values for these parameters, based on experimentation with data coming from indoor Wi-Fi measurements. The range and distribution of those values are different from those of the dataset used in this work.
Therefore, an appropriate adjustment of the values of the parameters α and β would adapt these data transformations to optimally fit this different setting.
1) Parameter α of the Exponential Data Representation: Regarding the parameter α of the exponential data representation, a range of candidate values has been tested. As a reminder, the default value of α is 24. Integer values in the range [10, 40] have been evaluated in the best configuration found so far concerning the exponential data representation. Thus, the Bray-Curtis distance has been used with k = 5 and τ = −157. The results are presented in the plot of Figure 4. The value α = 19 provides the lowest mean validation error of 339 meters. The corresponding test set performance is characterized by a mean error of 318 meters and a median of 117 meters. In the plot of Figure 4 it can be observed that there is a significant difference in performance compared to the default value (α = 24). In Figure 5, the results of a similar examination are depicted. This time, the test spans the space of candidate values of both the parameter α of the exponential data representation and the k hyperparameter of kNN. The minimum mean validation error is provided by α = 18 and k = 4, and is equal to 336 meters. The corresponding performance on the test set shows a mean error of 322 meters and a median error of 110 meters. There are several tuples of values of α and k in the proximity of the ones reported above that provide very similar results.
2) Parameter β of the Powed Data Representation: An analysis similar to the previous one is performed for the parameter β of the powed data representation. The range [2, 3] has been spanned with a granularity of 0.02. The default value of β is the constant e, which is approximately equal to 2.718. The best configuration found concerning the powed data representation has been used. Thus, the Bray-Curtis distance has been used with k = 6. The results are presented in the plot of Figure 6. The value β = 2.6 provides the lowest mean validation error of 318 meters. The mean error on the test set is 298 meters and the median 108 meters. In this case, where the parameter β is studied, the difference in performance between the best β value and the default value is less significant than in the preceding analysis of α. Indicatively, it is noted that the default value of β gave a mean error of 319 meters. In Figure 7, the results for combinations of candidate values of both the parameter β of the powed data representation and the k hyperparameter of kNN are reported. The minimum mean validation error is provided by β = 2.6 and k = 6, the same as in the previous analysis of Figure 6. The convex-shaped surface of Figure 7 reveals that the lowest values of the mean error appear in close proximity, since there are several tuples of values of β and k near the optimal ones reported above that provide very similar results.
VII. CONCLUSIONS AND FUTURE WORK
In this work, we presented a detailed study of the process of selecting the most appropriate preprocessing methodologies and performing hyperparameter tuning for RSSI fingerprinting in an urban Sigfox setting. We have discussed ways of appropriately preprocessing the RSSI data, so as to improve the accuracy of the studied fingerprinting localization method. Moreover, identifying the limits of the achievable accuracy of a Sigfox-based positioning system deployed in an urban area has been a main motivation of this work.
The examination of the results of this study may offer several take-away messages. A linear handling of the RSSI values, with the positive and normalized data representations, achieved an above 500-meter mean error on the validation set. Introducing non-linear transformations, proposed by the relevant literature as more appropriate for the handling of RSSI values, which correspond to a logarithmic scale, reduced the error to the level of 344 meters. To the best of our knowledge, this study is the first to go beyond these steps and further optimize the handling of out-of-range values and the tuning of the parameters α and β of the preprocessing data transformations, so as to match the particularities of this outdoor setting, which does not utilize a Wi-Fi system as in the original paper [14], but another technology, namely Sigfox. The best performing setting proposed in this work achieves a mean error of 317 meters on the validation set. This setting achieves a 298-meter error on the test set, with the corresponding median error being 109 meters. A comparison with the performance of previous works would not be strictly consistent, even if the same dataset has been used, since the validation and test sets are not the same. Nevertheless, we could discuss the order of magnitude of the error achieved. In the initial work where the used dataset became available [3], the linear handling of the RSSI values resulted in a high mean error of 689 meters. A more recent work [13] has utilized the data preprocessing methods proposed by Torres-Sospedra et al. [14], reporting a mean validation error of 340 meters. The results of Table I of subsection VI-A report a similar mean error of 344 meters, for the same best data representation found in [13], the exponential. The additional analysis of the threshold value τ, and the appropriate adjustment of the preprocessing parameters α and β proposed by this work, further improve the localization accuracy. Moreover, it is noteworthy that the appropriate adjustment of τ sets a different preprocessing method as the best performing one. While in Table I of subsection VI-A, as well as in the work of Janssen et al. [13], the exponential data representation is suggested by the obtained results, the appropriate adjustment of τ sets the powed data representation as preferable. It is these preprocessing adjustments that give a mean error of 317 meters on the validation set and 298 meters on the test set, with 121 and 109 meters for the corresponding median errors. The improvements in accuracy obtained by first using and then tuning these preprocessing steps indicate their significance. Appropriate preprocessing of the data may have a huge impact on the performance of the positioning system using it, and it is thus recommended as an indispensable step of the model selection process. A driving motivation of the authors has been the intention to encourage the sharing of material within the positioning community, which can facilitate consistent comparisons and accelerate the advancement of the field. Being thankful for the public offering of the dataset to the community by Aernouts et al. [3], we proceed to share the train/validation/test split of the dataset used in this work, as well as the code used for the tests. As localization with LPWANs is a rather recent field of study, there are not many publicly available datasets so far. Moreover, the size of the dataset plays a crucial role when machine learning approaches are used for localization.
A dense spatial sampling of the area of interest can positively affect the localization performance. Additionally, other types of measurements apart from the RSSI value, such as ToA, TDoA or LSNR values, would be of great interest to study. The intention of the authors is to work in the direction of creating and sharing such datasets. We also invite the community to embrace this effort, which can accelerate the improvement of the field. Lastly, an interesting future direction for studies such as the current one may be the evaluation of the computational complexity of the machine learning methods and the distance metrics that are tested. In the current work, the focus has been on the performance of the positioning system in terms of accuracy. Nevertheless, in practical settings, factors such as computational complexity may play a crucial role in the selection of the model used.
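For concreteness, the following sketch puts the best-performing configuration reported above (powed representation with β = 2.6, τ = −159, Bray-Curtis distance, k = 6) into a minimal kNN evaluation loop. It assumes NumPy and scikit-learn; the fingerprint matrices and coordinate arrays are placeholders, and this is an illustrative reconstruction rather than the released code.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

EARTH_RADIUS_M = 6371000

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def powed(rssi, tau=-159.0, rss_min=-157.0, beta=2.6):
    """Powed representation (Eq. 5) with the tuned parameters."""
    pos = np.maximum(rssi, tau) - tau
    return pos ** beta / (-rss_min) ** beta

def evaluate(X_train, y_train, X_test, y_test, k=6):
    """X_*: (n, 84) RSSI matrices with -200 for non-receiving basestations.
    y_*: (n, 2) arrays of [latitude, longitude] ground truth in degrees."""
    knn = KNeighborsRegressor(n_neighbors=k, metric="braycurtis")
    knn.fit(powed(X_train), y_train)
    pred = knn.predict(powed(X_test))
    errors = haversine(y_test[:, 0], y_test[:, 1], pred[:, 0], pred[:, 1])
    return float(np.mean(errors)), float(np.median(errors))
```

Averaging the coordinates of the k nearest fingerprints, as KNeighborsRegressor does here, is only one way to realize the kNN position estimate; distance-weighted variants fit the same setup.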
4,732
1908.06388
2968455557
In this paper, we design a drug release mechanism for dynamic time division multiple access (TDMA)-based molecular communication via diffusion (MCvD). In the proposed scheme, the communication frame is divided into several time slots over each of which a transmitter nanomachine is scheduled to convey its information by releasing the molecules into the medium. To optimize the number of released molecules and the time duration of each time slot (symbol duration), we formulate a multi-objective optimization problem whose objective functions are the bit error rate (BER) of each transmitter nanomachine. Based on the number of released molecules and symbol durations, we consider four cases, namely: "static-time static-number of molecules" (STSN), "static-time dynamic-number of molecules" (STDN), "dynamic-time static-number of molecules" (DTSN), and "dynamic-time dynamic-number of molecules" (DTDN). We consider three types of medium in which the molecules are propagated, namely: "mild diffusive environment" (MDE), "moderate diffusive environment" (MODE), and "severe diffusive environment" (SDE). For the channel model, we consider a 3-dimensional (3D) diffusive environment, such as blood, with drift in three directions. Simulation results show that the STSN approach is the least complex one with BER around @math , but, the DTDN is the most complex scenario with the BER around @math .
Researchers have studied TDMA optimization in neuron-based MC, which employs neurons to communicate and to build in-body sensor-actuator networks (IBSANs) @cite_23 . They use an evolutionary multi-objective optimization algorithm to design the TDMA schedule. Resource allocation in MC has already been studied for two transmitter nodes in @cite_10 , where the authors propose a game-theoretic framework and study the Bit Error Rate (BER) of such a system. In addition, the channel capacity of multiple-access channels that employ the principles of natural ligand-receptor binding is investigated in @cite_24 . Furthermore, the researchers have found a high capacity in Single-input Single-output (SISO) and Multi-input Single-output (MISO)-based MC systems @cite_24 . The investigation of more than two transmitter nodes in a multiple-access channel has not been considered in existing works. In addition, TDMA in Molecular Communication via Diffusion (MCvD) systems has not been studied in the existing works on MC. The optimization of symbol durations and of the number of molecules released by each transmitter node is also not considered in the existing works.
{ "abstract": [ "Molecular communication is a new nano-scale communication paradigm that enables nanomachines to communicate with each other by emitting molecules to their surrounding environment. Nanonetworks are also envisioned to be composed of a number of nanomachines with molecular communication capability that are deployed in an environment to share specific molecular information such as odor, flavour, light, or any chemical state. In this paper, using the principles of natural ligand-receptor binding mechanisms in biology, we first derive a capacity expression for single molecular channel in which a single Transmitter Nanomachine (TN) communicates with a single Receiver Nanomachine (RN). Then, we investigate the capacity of the molecular multiple-access channel in which multiple TNs communicate with a single RN. Numerical results reveal that high molecular communication capacities can be attainable for the single and multiple-access molecular channels.", "Currently, communication between nanomachines is an important topic for the development of novel devices. To implement a nanocommunication system, diffusion-based molecular communication is considered as a promising bio-inspired approach. Various technical issues about molecular communications, including channel capacity, noise and interference, and modulation and coding, have been studied in the literature, while the resource allocation problem among multiple nanomachines has not been well investigated, which is a very important issue since all the nanomachines share the same propagation medium. Considering the limited computation capability of nanomachines and the expensive information exchange cost among them, in this paper, we propose a game-theoretic framework for distributed resource allocation in nanoscale molecular communication systems. We first analyze the inter-symbol and inter-user interference, as well as bit error rate performance, in the molecular communication system. Based on the interference analysis, we formulate the resource allocation problem as a non-cooperative molecule emission control game, where the Nash equilibrium is found and proved to be unique. In order to improve the system efficiency while guaranteeing fairness, we further model the resource allocation problem using a cooperative game based on the Nash bargaining solution, which is proved to be proportionally fair. Simulation results show that the Nash bargaining solution can effectively ensure fairness among multiple nanomachines while achieving comparable social welfare performance with the centralized scheme.", "This paper proposes and evaluates Neuronal TDMA, a TDMA-based signaling protocol framework for molecular communication, which utilizes neurons as a primary component to build in-body sensor-actuator networks (IBSANs). Neuronal TDMA leverages an evolutionary multiobjective optimization algorithm (EMOA) that optimizes the signaling schedule for nanomachines in IBSANs. The proposed EMOA uses a population of solution candidates, each of which represents a particular signaling schedule, and evolves them via several operators such as selection, crossover, mutation and offspring size adjustment. The evolution process is performed to seek Pareto-optimal signaling schedules subject to given constraints. Simulation results verify that the proposed EMOA efficiently obtains quality solutions. It outperforms several conventional EMOAs." ], "cite_N": [ "@cite_24", "@cite_10", "@cite_23" ], "mid": [ "2140013661", "2119160818", "2031540173" ] }
0
1908.05908
2969050910
In this paper, we report our method for the Information Extraction task in the 2019 Language and Intelligence Challenge. We incorporate BERT into the multi-head selection framework for joint entity-relation extraction. This model extends existing approaches from three perspectives. First, BERT is adopted as a feature extraction layer at the bottom of the multi-head selection framework. We further optimize BERT by introducing a semantic-enhanced task during BERT pre-training. Second, we introduce a large-scale Baidu Baike corpus for entity recognition pre-training, which is weakly supervised learning since there is no actual named entity label. Third, soft label embedding is proposed to effectively transmit information between entity recognition and relation extraction. Combining these three contributions, we enhance the information extraction ability of the multi-head selection model and achieve an F1-score of 0.876 on testset-1 with a single model. By ensembling four variants of our model, we finally achieve an F1 score of 0.892 (1st place) on testset-1 and an F1 score of 0.8924 (2nd place) on testset-2.
In recent years, great efforts have been made on extracting relational facts from unstructured raw text to build large structured knowledge bases. A relational fact is often represented as a triplet which consists of two entities (subject and object) and the semantic relation between them. Early works @cite_13 @cite_0 @cite_14 mainly focused on the task of relation classification, which assumes the entity pair is identified beforehand. This limits their practical application since they neglect the extraction of entities. To extract both entities and their relation, existing methods can be divided into two categories: the pipelined framework, which first uses sequence labeling models to extract entities and then uses relation classification models to identify the relation between each entity pair; and the joint approach, which combines the entity model and the relation model through different strategies, such as constraints or parameter sharing.
{ "abstract": [ "The state-of-the-art methods used for relation classification are primarily based on statistical machine learning, and their performance strongly depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing natural language processing (NLP) systems, which leads to the propagation of the errors in the existing tools and hinders the performance of these systems. In this paper, we exploit a convolutional deep neural network (DNN) to extract lexical and sentence level features. Our method takes all of the word tokens as input without complicated pre-processing. First, the word tokens are transformed to vectors by looking up word embeddings 1 . Then, lexical level features are extracted according to the given nouns. Meanwhile, sentence level features are learned using a convolutional approach. These two level features are concatenated to form the final extracted feature vector. Finally, the features are fed into a softmax classifier to predict the relationship between two marked nouns. The experimental results demonstrate that our approach significantly outperforms the state-of-the-art methods.", "Syntactic features play an essential role in identifying relationship in a sentence. Previous neural network models directly work on raw word sequences or constituent parse trees, thus often suffer from irrelevant information introduced when subjects and objects are in a long distance. In this paper, we propose to learn more robust relation representations from shortest dependency paths through a convolution neural network. We further take the relation directionality into account and propose a straightforward negative sampling strategy to improve the assignment of subjects and objects. Experimental results show that our method outperforms the state-of-theart approaches on the SemEval-2010 Task 8 dataset.", "We present a brief overview of the main challenges in the extraction of semantic relations from English text, and discuss the shortcomings of previous data sets and shared tasks. This leads us to introduce a new task, which will be part of SemEval-2010: multi-way classification of mutually exclusive semantic relations between pairs of common nominals. The task is designed to compare different approaches to the problem and to provide a standard testbed for future research, which can benefit many applications in Natural Language Processing." ], "cite_N": [ "@cite_0", "@cite_14", "@cite_13" ], "mid": [ "2250521169", "1551842868", "2099779943" ] }
BERT-Based Multi-Head Selection for Joint Entity-Relation Extraction
Given a sentence and a list of pre-defined schemas which define the relation P and the classes of its corresponding subject S and object O, for example, (S TYPE: Person, P: wife, O TYPE: Person), (S TYPE: Company, P: founder, O TYPE: Person), a participating information extraction (IE) system is expected to output all correct triples [(S1, P1, O1), (S2, P2, O2) ...] mentioned in the sentence under the constraints of the given schemas. The largest schema-based Chinese information extraction dataset is released in this competition. Precision, Recall and F1 score are used as the basic evaluation metrics to measure the performance of participating systems. From the example shown in Figure 1, we can notice that one entity can be involved in multiple triplets and that entity spans can overlap, which are the main difficulties of this task.
BERT for Feature Extraction
BERT (Bidirectional Encoder Representations from Transformers) [1] is a new language representation model, which pre-trains bidirectional transformers on a large unlabeled corpus and fine-tunes the pre-trained model on other tasks. BERT has been widely used and shows great improvement on various natural language processing tasks, e.g., word segmentation, named entity recognition, sentiment analysis, and question answering. We use BERT to extract contextual features for each character, instead of the BiLSTM used in the original work [15]. To further improve the performance, we optimize the pre-training process of BERT by introducing a semantic-enhanced task.
Enhanced BERT
The original Google BERT is pre-trained using two unsupervised tasks: masked language modeling (MLM) and next sentence prediction (NSP). The MLM task enables the model to capture discriminative contextual features. The NSP task makes it possible to understand the relationship between sentence pairs, which is not directly captured by language modeling. We further design a semantic-enhanced task to enhance the performance of BERT. It incorporates previous sentence prediction and document-level prediction. We pre-train BERT by combining MLM, NSP and the semantic-enhanced task together.
Named Entity Recognition
NER (Named Entity Recognition) is the first task in the joint multi-head selection model. It is usually formulated as a sequence labeling problem using the BIO (Beginning, Inside, Outside) encoding scheme. Since there are different entity types, the tags are extended to B-type, I-type and O. A linear-chain CRF [16] is widely used for sequence labeling in deep models. In our method, the CRF is built on top of BERT. Suppose $y_i \in \{B\text{-type}, I\text{-type}, O\}$ is the label of the $i$-th character, the score $s(X, i)_{y_i}$ is the output of BERT at the $i$-th character, and $b_{y_{i-1} y_i}$ are trainable transition parameters; the probability of a possible label sequence is formalized as:

$P(Y|X) = \dfrac{\prod_{i=2}^{n} \exp\big(s(X, i)_{y_i} + b_{y_{i-1} y_i}\big)}{\sum_{Y'} \prod_{i=2}^{n} \exp\big(s(X, i)_{y'_i} + b_{y'_{i-1} y'_i}\big)}$ (1)

By solving Eq. 2 we can obtain the optimal sequence of tags:

$Y^{*} = \arg\max_{Y} P(Y|X)$ (2)

Extra Corpus for NER Pretraining
Previous works show that introducing extra data for distant supervised learning usually boosts the model performance. For this task, we collect a large-scale Baidu Baike corpus (about 6 million sentences) for NER pre-training. As shown in Figure 4, each sample contains the content and its title. These samples are auto-crawled, so there is no actual entity label. We consider the title of each sample as a pseudo label and conduct NER pre-training using these data. Experimental results show that it improves performance.
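As an illustration of the decoding step in Equation 2, the following is a minimal NumPy sketch of Viterbi decoding over per-character emission scores s(X, i) and transition scores b. It is a simplified stand-in for a CRF layer's decoder, not the authors' implementation.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Return the tag sequence maximizing sum_i s(X, i)_{y_i} + b_{y_{i-1} y_i}.

    emissions:   (seq_len, num_tags) array of scores s(X, i).
    transitions: (num_tags, num_tags) array of scores b[prev_tag, next_tag].
    """
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()                    # best score ending in each tag
    backpointers = np.zeros((seq_len, num_tags), dtype=int)

    for i in range(1, seq_len):
        # candidate[prev, cur] = best score through prev + transition + emission
        candidate = score[:, None] + transitions + emissions[i][None, :]
        backpointers[i] = candidate.argmax(axis=0)
        score = candidate.max(axis=0)

    best_last = int(score.argmax())                # best final tag
    path = [best_last]
    for i in range(seq_len - 1, 0, -1):            # follow backpointers
        path.append(int(backpointers[i, path[-1]]))
    return list(reversed(path))
```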
Soft Label Embedding
[15] use the entity tags as input to the relation classification layer by learning label embeddings. As reported in their experiments, an improvement of 1∼2% F1 is achieved with the use of label embeddings. Their mechanism is a hard label embedding because they use the CRF decoding results, which has two disadvantages. On one hand, the entity recognition results are not absolutely correct since they are predicted by the model during inference. The error from the entity tags may propagate to the relation classification branch and hurt the performance. On the other hand, the CRF decoding process is based on the Viterbi algorithm, which contains an argmax operation that is not differentiable. To solve this problem, we propose soft label embedding, which takes the logits as input to preserve the probability of each entity type. Suppose $N$ is the logits dimension, i.e., the number of entity types, and $M$ is the label embedding matrix (with $N$ rows); then the soft label embedding for the $i$-th character can be formalized as Eq. 3:

$h_i = \mathrm{softmax}(s(X, i)) \cdot M$ (3)

Relation Classification as Multi-Head Selection
We formulate the relation classification task as a multi-head selection problem, since each token in the sentence can have multiple heads, i.e., multiple relations with other tokens. The soft label embedding of the $i$-th token, $h_i$, is fed into two separate fully connected layers to get the subject representation $h^s_i$ and the object representation $h^o_i$. Given the $i$-th token $(h^s_i, h^o_i)$ and the $j$-th token $(h^s_j, h^o_j)$, our task is to predict their relation:

$r_{i,j} = f(h^s_i, h^o_j), \qquad r_{j,i} = f(h^s_j, h^o_i)$ (4)

where $f(\cdot)$ denotes a neural network, $r_{i,j}$ is the relation when the $i$-th token is the subject and the $j$-th token is the object, and $r_{j,i}$ is the relation when the $j$-th token is the subject and the $i$-th token is the object. Since the same entity pair can have multiple relations, we adopt a multi-sigmoid layer for relation prediction. We minimize the cross-entropy loss $L_{rel}$ during training:

$L_{rel} = \sum_{i=0}^{K} \sum_{j=0}^{K} \mathrm{NLL}(r_{i,j}, y_{i,j})$ (5)

where $K$ is the sequence length and $y_{i,j}$ is the ground truth relation label.
Global Relation Prediction
Relation classification operates at the entity-pair level in the original multi-head selection framework. We introduce an auxiliary sentence-level relation classification task to guide the feature learning process. As shown in Figure 3, the final hidden state of the first token [CLS] is taken to obtain a fixed-dimensional pooled representation of the input sequence. The hidden state is then fed into a multi-sigmoid layer for classification. In conclusion, our model is trained using the combined loss:

$L = L_{ner} + L_{rel} + L_{global\_rel}$ (6)

Model Ensemble
Ensemble learning is an effective method to further improve performance. It is widely used in data mining and machine learning competitions. The basic idea is to combine the decisions from multiple models to improve the overall performance. In this work, we combine four variant multi-head selection models by learning an XGBoost [18] binary classification model on the development set. Each triplet generated by a base model is treated as a sample. We then carefully design 200-dimensional features for each sample.
Several important features are, for example:
· the probability distribution of the entity pair;
· the sentence-level probability distribution;
· whether the triplet appears in the training set;
· the number of predicted entities, triples and relations for the given sentence;
· whether the entity boundary is consistent with the word segmentation results;
· a semantic feature: we concatenate the sentence and the triplet to train an NLI model, and hard negative triplets are constructed to help the NLI model capture semantic features.
Experiments
Experimental Settings
All experiments are run on hardware with an Intel(R) Xeon(R) CPU E5-2682 v4 @ 2.50GHz and an NVIDIA Tesla P100.
Dataset and evaluation metrics
We evaluate our method on the SKE dataset used in this competition, which is the largest schema-based Chinese information extraction dataset in the industry, containing more than 430,000 SPO triples in over 210,000 real-world Chinese sentences, bounded by a pre-specified schema with 50 types of predicates. All sentences in the SKE dataset are extracted from Baidu Baike and Baidu News Feeds. The dataset is divided into a training set (170k sentences), a development set (20k sentences) and a testing set (20k sentences). The training set and the development set are to be used for training and are available for free download. The test set is divided into two parts: test set 1 is available for self-verification, while test set 2 is released one week before the end of the competition and used for the final evaluation.
Hyperparameters
The max sequence length is set to 128, the number of fully connected layers of the relation classification branch is set to 2, and that of the global relation branch is set to 1. During training, we use Adam with a learning rate of 2e-5 and a dropout probability of 0.1. The model converges in 3 epochs.
Preprocessing
All uppercase letters are converted to lowercase letters. We use a max sequence length of 128, so sentences longer than that are split by punctuation. According to the FAQ, entities in book title marks should be completely extracted. Because the annotation criteria in the training set are diverse, we revise the incomplete entities. To keep consistency, book title marks around the entities are removed.
Postprocessing
Our postprocessing mechanism is mainly based on the FAQ evaluation rules. After model prediction, we remove triplets whose entity-relation types violate the given schemas. For entities contained in book title marks, we complete them if they are incomplete. Date-type entities are also completed to the finest granularity. These steps are implemented by regular expression matching. Note that entity-related preprocessing and postprocessing are also performed on the development set to keep consistency with the test set, so the change in the development metric is reliable.
Main Results
Results on the SKE dataset are presented in Table 1. The baseline model is based on the Google BERT, uses hard label embedding, and is trained only on the SKE dataset without NER pretraining. As shown in Table 1, the F1 score increases from 0.864 to 0.871 when combined with our enhanced BERT. NER pretraining using the extra corpus, soft label embedding and the auxiliary sentence-level relation classification prediction also improve the F1 score. Combining all of these contributions, we achieve an F1-score of 0.876 with a single model on test set 1.
Model Ensemble
We select the following four model variants (listed below) for model ensembling. The ensemble model is an XGBoost binary classifier, which is very fast to train.
· Google BERT + Soft Label Embedding + Global Relation Prediction
· Enhanced BERT + Soft Label Embedding + Global Relation Prediction
· Google BERT + Soft Label Embedding + Global Relation Prediction + NER Pretraining
· Enhanced BERT + Soft Label Embedding + Global Relation Prediction + NER Pretraining
Since the base models are trained on the training set, we perform cross-validation on the development set; Figure 5 shows the PR curve of the ensemble model. By model ensembling, the F1 score increases from 0.876 to 0.892.
Case Study
Two examples that our model fails to predict are shown in Figure 6. For example 1, the triplet cannot be drawn from the given sentence; however, the triplet actually appears in the training set. Our model may overfit to the training set in this situation. For example 2, there are complicated family relationships mentioned in the sentence, which are too hard for the model to capture. To solve this problem, a more robust model should be proposed, and we leave this as future work.
Conclusion
In this paper, we report our solution to the information extraction task in the 2019 Language and Intelligence Challenge. We first analyze the problem and find that most entities are involved in multiple triplets. To solve this problem, we incorporate BERT into the multi-head selection framework for joint entity-relation extraction. Enhanced BERT pre-training, soft label embedding and NER pre-training are the three main techniques we introduce to further improve the performance. Experimental results show that our method achieves competitive performance: an F1 score of 0.892 (1st place) on test set 1 and an F1 score of 0.8924 (2nd place) on test set 2.
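To make Equations 3–5 above concrete, the following PyTorch-style sketch shows one way soft label embeddings and the pairwise multi-head selection scores could be computed. The layer shapes, the bilinear scorer standing in for f(·), and all names are illustrative assumptions rather than the exact architecture used in the system.

```python
import torch
import torch.nn as nn

class MultiHeadSelectionSketch(nn.Module):
    """Illustrative sketch of soft label embedding (Eq. 3) and pairwise
    relation scoring (Eq. 4); not the authors' exact architecture."""

    def __init__(self, hidden_size, num_entity_types, label_dim, num_relations):
        super().__init__()
        # M in Eq. 3: one embedding row per entity type (N rows).
        self.label_embedding = nn.Parameter(torch.randn(num_entity_types, label_dim))
        self.subj_proj = nn.Linear(hidden_size + label_dim, hidden_size)
        self.obj_proj = nn.Linear(hidden_size + label_dim, hidden_size)
        # A bilinear scorer as one possible choice for f(.) in Eq. 4.
        self.rel_scorer = nn.Bilinear(hidden_size, hidden_size, num_relations)

    def forward(self, encoder_states, ner_logits):
        # Eq. 3: keep the full entity-type distribution instead of the
        # hard, non-differentiable CRF/argmax decision.
        soft_labels = torch.softmax(ner_logits, dim=-1) @ self.label_embedding
        token_repr = torch.cat([encoder_states, soft_labels], dim=-1)

        subj = self.subj_proj(token_repr)        # (batch, seq_len, hidden)
        obj = self.obj_proj(token_repr)

        # Eq. 4: score every (subject i, object j) pair for every relation.
        batch, seq_len, hidden = subj.shape
        subj_exp = subj.unsqueeze(2).expand(batch, seq_len, seq_len, hidden)
        obj_exp = obj.unsqueeze(1).expand(batch, seq_len, seq_len, hidden)
        scores = self.rel_scorer(subj_exp.reshape(-1, hidden),
                                 obj_exp.reshape(-1, hidden))
        scores = scores.view(batch, seq_len, seq_len, -1)

        # Multi-sigmoid: each token pair may hold several relations at once.
        return torch.sigmoid(scores)
```

During training, the per-pair sigmoid outputs would be compared against multi-hot relation labels with a binary cross-entropy objective, corresponding to the summed loss in Eq. 5.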
1,958
1908.05666
2967081197
Distributed computing frameworks such as MapReduce are often used to process large computational jobs. They operate by partitioning each job into smaller tasks executed on different servers. The servers also need to exchange intermediate values to complete the computation. Experimental evidence suggests that this so-called Shuffle phase can be a significant part of the overall execution time for several classes of jobs. Prior work has demonstrated a natural tradeoff between computation and communication whereby running redundant copies of jobs can reduce the Shuffle traffic load, thereby leading to reduced overall execution times. For a single job, the main drawback of this approach is that it requires the original job to be split into a number of files that grows exponentially in the system parameters. When extended to multiple jobs (with specific function types), these techniques suffer from a limitation of a similar flavor, i.e., they require an exponentially large number of jobs to be executed. In practical scenarios, these requirements can significantly reduce the promised gains of the method. In this work, we show that a class of combinatorial structures called resolvable designs can be used to develop efficient coded distributed computing schemes for both the single and multiple job scenarios considered in prior work. We present both theoretical analysis and exhaustive experimental results (on Amazon EC2 clusters) that demonstrate the performance advantages of our method. For the single and multiple job cases, we obtain speed-ups of 4.69x (and 2.6x over prior work) and 4.31x over the baseline approach, respectively.
The work of @cite_22 introduced ``ShuffleWatcher'', a MapReduce scheduler that improves cluster throughput and job completion time. The scheme replicates Map tasks and delays or elongates a job's Shuffle depending on the network load. Their technique also judiciously assigns Reduce tasks to workers based on the Map assignment. Other related work on this topic includes @cite_3 , which considers a model of MapReduce executed on a scale-up multi-core machine and proposes a topology-aware reducer placement to expedite data shuffling. The authors of @cite_29 present an algorithm that finds the optimal data placement and jointly optimizes the Map and Shuffle times.
{ "abstract": [ "The data placement strategy greatly affects the efficiency of MapReduce. The current strategy only takes the map phase into account to optimize the map time. But the ignored shuffle phase may increase the total running time significantly in many jobs. We propose a new data placement strategy, named OPTAS, which optimizes both the map and shuffle phases to reduce their total time. However, the huge search space makes it difficult to find out an optimal data placement instance (DPI) rapidly. To address this problem, an algorithm is proposed which can prune most of the search space and find out an optimal result quickly. The search space firstly is segmented in ascending order according to the potential map time. Within each segment, we propose an efficient method to construct a local optimal DPI with the minimal total time of both the map and shuffle phases. To find the global optimal DPI, we scan the local optimal DPIs in order. We have proven that the global optimal DPI can be found as the first local optimal DPI whose total time stops decreasing, thus further pruning the search space. In practice, we find that at most fourteen local optimal DPIs are scanned in tens of thousands of segments with the pruning strategy. Extensive experiments with real trace data verify not only the theoretic analysis of our pruning strategy and construction method but also the optimality of OPTAS. The best improvements obtained in our experiments can be over 40 compared with the existing strategy used by MapReduce.", "MapReduce clusters are usually multi-tenant (i.e., shared among multiple users and jobs) for improving cost and utilization. The performance of jobs in a multitenant MapReduce cluster is greatly impacted by the all-Map-to-all-Reduce communication, or Shuffle, which saturates the cluster's hard-to-scale network bisection bandwidth. Previous schedulers optimize Map input locality but do not consider the Shuffle, which is often the dominant source of traffic in MapReduce clusters. We propose ShuffleWatcher, a new multitenant MapReduce scheduler that shapes and reduces Shuffle traffic to improve cluster performance (throughput and job turn-around times), while operating within specified fairness constraints. ShuffleWatcher employs three key techniques. First, it curbs intra-job Map-Shuffle concurrency to shape Shuffle traffic by delaying or elongating a job's Shuffle based on the network load. Second, it exploits the reduced intra-job concurrency and the flexibility engendered by the replication of Map input data for fault tolerance to preferentially assign a job's Map tasks to localize the Map output to as few nodes as possible. Third, it exploits localized Map output and delayed Shuffle to reduce the Shuffle traffic by preferentially assigning a job's Reduce tasks to the nodes containing its Map output. ShuffleWatcher leverages opportunities that are unique to multi-tenancy, such overlapping Map with Shuffle across jobs rather than within a job, and trading-off intra-job concurrency for reduced Shuffle traffic. On a 100-node Amazon EC2 cluster running Hadoop, ShuffleWatcher improves cluster throughput by 39-46 and job turn-around times by 27-32 over three state-of-the-art schedulers.", "Nowadays, MapReduce has become very popular in many applications, such as high performance computing. It typically consists of map, shuffle and reduce phases. 
As an important one among these three phases, data shuffling usually accounts for a large portion of the entire running time of MapReduce jobs. MapReduce was originally designed in scale-out architecture with inexpensive commodity machines. However, in recent years, scale-up computing architecture for MapReduce jobs has been developed. Some studies indicate that in certain cases, a powerful scale-up machine can outperform a scale-out cluster with multiple machines. With multi-processor, multi-core design connected via NUMAlink and large shared memories, NUMA architecture provides a powerful scale-up computing capability. Compared with Ethernet connection and TCP IP network, NUMAlink has a much faster data transfer speed which can greatly expedite the data shuffling of MapReduce jobs. The impact of NUMAlink on data shuffling in NUMA scale-up architecture has not been fully investigated in previous work. In this paper, we ignore the computing power (i.e., map and reduce phases) of MapReduce, but focus on the optimization of data shuffling phase in MapReduce framework in NUMA machine. We concentrate on the various bandwidth capacities of NUMAlink(s) among different memory locations to fully utilize the network. We investigate the NUMAlink topology using SGI UV 2000 as an example and propose a topology-aware reducer placement algorithm to speed up the data shuffling phase. In addition, we extend our approach to a larger computing environment with multiple NUMA machines, and design a reducer placement scheme to expedite the inter-NUMA machine data shuffling. Experiments results show that data shuffling time can be greatly reduced in NUMA architecture with our solution." ], "cite_N": [ "@cite_29", "@cite_22", "@cite_3" ], "mid": [ "2044688447", "1471136644", "2522947711" ] }
Resolvable Designs for Speeding up Distributed Computing
0
1908.05666
2967081197
Distributed computing frameworks such as MapReduce are often used to process large computational jobs. They operate by partitioning each job into smaller tasks executed on different servers. The servers also need to exchange intermediate values to complete the computation. Experimental evidence suggests that this so-called Shuffle phase can be a significant part of the overall execution time for several classes of jobs. Prior work has demonstrated a natural tradeoff between computation and communication whereby running redundant copies of jobs can reduce the Shuffle traffic load, thereby leading to reduced overall execution times. For a single job, the main drawback of this approach is that it requires the original job to be split into a number of files that grows exponentially in the system parameters. When extended to multiple jobs (with specific function types), these techniques suffer from a limitation of a similar flavor, i.e., they require an exponentially large number of jobs to be executed. In practical scenarios, these requirements can significantly reduce the promised gains of the method. In this work, we show that a class of combinatorial structures called resolvable designs can be used to develop efficient coded distributed computing schemes for both the single and multiple job scenarios considered in prior work. We present both theoretical analysis and exhaustive experimental results (on Amazon EC2 clusters) that demonstrate the performance advantages of our method. For the single and multiple job cases, we obtain speed-ups of 4.69x (and 2.6x over prior work) and 4.31x over the baseline approach, respectively.
The recent work of @cite_7 introduces a scheme to handle the case when each Reduce function is computed by @math workers by utilizing a hypercube structure that controls the allocation of Map and Reduce tasks. Their work is motivated by distributed applications that require multiple rounds of Map and Reduce computations, where the Reduce results of the previous round serve as the inputs to the Map functions of the next one.
{ "abstract": [ "Coded distributed computing introduced by in 2015 is an efficient approach to trade computing power to reduce the communication load in general distributed computing frameworks such as MapReduce. In particular, show that increasing the computation load in the Map phase by a factor of @math can create coded multicasting opportunities to reduce the communication load in the Reduce phase by the same factor. However, there are two major limitations in practice. First, it requires an exponentially large number of input files (data batches) when the number of computing nodes gets large. Second, it forces every @math computing nodes to compute one Map function, which leads to a large number of Map functions required to achieve the promised gain. In this paper, we make an attempt to overcome these two limitations by proposing a novel coded distributed computing approach based on a combinatorial design. We demonstrate that when the number of computing nodes becomes large, 1) the proposed approach requires an exponentially less number of input files; 2) the required number of Map functions is also reduced exponentially. Meanwhile, the resulting computation-communication trade-off maintains the multiplicative gain compared to conventional uncoded unicast and achieves the information theoretic lower bound asymmetrically for some system parameters." ], "cite_N": [ "@cite_7" ], "mid": [ "2964333472" ] }
Resolvable Designs for Speeding up Distributed Computing
0
1908.05666
2967081197
Distributed computing frameworks such as MapReduce are often used to process large computational jobs. They operate by partitioning each job into smaller tasks executed on different servers. The servers also need to exchange intermediate values to complete the computation. Experimental evidence suggests that this so-called Shuffle phase can be a significant part of the overall execution time for several classes of jobs. Prior work has demonstrated a natural tradeoff between computation and communication whereby running redundant copies of jobs can reduce the Shuffle traffic load, thereby leading to reduced overall execution times. For a single job, the main drawback of this approach is that it requires the original job to be split into a number of files that grows exponentially in the system parameters. When extended to multiple jobs (with specific function types), these techniques suffer from a limitation of a similar flavor, i.e., they require an exponentially large number of jobs to be executed. In practical scenarios, these requirements can significantly reduce the promised gains of the method. In this work, we show that a class of combinatorial structures called resolvable designs can be used to develop efficient coded distributed computing schemes for both the single and multiple job scenarios considered in prior work. We present both theoretical analysis and exhaustive experimental results (on Amazon EC2 clusters) that demonstrate the performance advantages of our method. For the single and multiple job cases, we obtain speed-ups of 4.69x (and 2.6x over prior work) and 4.31x over the baseline approach, respectively.
Another approach that re-examines the computation-communication tradeoff from an alternate viewpoint has been investigated in @cite_9 . In this case, the assumption is that a server does not need to process all locally available files, so storage constraints do not necessarily imply computation constraints. A lower bound on the communication load was derived, along with a heuristic scheme that achieves it in some cases.
{ "abstract": [ "In this paper, we revisit the communication vs. distributed computing trade-off, studied within the framework of MapReduce in [1]. An implicit assumption in the aforementioned work is that each server performs all possible computations on all the files stored in its memory. Our starting observation is that, if servers can compute only the intermediate values they need, then storage constraints do not directly imply computation constraints. We examine how this affects the communication-computation trade-off and suggest that the trade-off be studied with a predetermined storage constraint. We then proceed to examine the case where servers need to perform computationally intensive tasks, and may not have sufficient time to perform all computations required by the scheme in [1]. Given a threshold that limits the computational load, we derive a lower bound on the associated communication load, and propose a heuristic scheme that achieves in some cases the lower bound." ], "cite_N": [ "@cite_9" ], "mid": [ "2617602628" ] }
Resolvable Designs for Speeding up Distributed Computing
0
1908.05666
2967081197
Distributed computing frameworks such as MapReduce are often used to process large computational jobs. They operate by partitioning each job into smaller tasks executed on different servers. The servers also need to exchange intermediate values to complete the computation. Experimental evidence suggests that this so-called Shuffle phase can be a significant part of the overall execution time for several classes of jobs. Prior work has demonstrated a natural tradeoff between computation and communication whereby running redundant copies of jobs can reduce the Shuffle traffic load, thereby leading to reduced overall execution times. For a single job, the main drawback of this approach is that it requires the original job to be split into a number of files that grows exponentially in the system parameters. When extended to multiple jobs (with specific function types), these techniques suffer from a limitation of a similar flavor, i.e., they require an exponentially large number of jobs to be executed. In practical scenarios, these requirements can significantly reduce the promised gains of the method. In this work, we show that a class of combinatorial structures called resolvable designs can be used to develop efficient coded distributed computing schemes for both the single and multiple job scenarios considered in prior work. We present both theoretical analysis and exhaustive experimental results (on Amazon EC2 clusters) that demonstrate the performance advantages of our method. For the single and multiple job cases, we obtain speed-ups of 4.69x (and 2.6x over prior work) and 4.31x over the baseline approach, respectively.
In @cite_24 , the authors consider a setting in which each server has access to a random subset of the input files and not every Reduce function depends on the entire data set. This changes the policy by which one decides which server computes which function.
{ "abstract": [ "In wireless distributed computing, networked nodes perform intermediate computations over data placed in their memory and exchange these intermediate values to calculate function values. In this paper we consider an asymmetric setting where each node has access to a random subset of the data, i.e., we cannot control the data placement. The paper makes a simple point: we can realize significant benefits if we are allowed to be “flexible”, and decide which node computes which function, in our system. We make this argument in the case where each function depends on only two of the data messages, as is the case in similarity searches. We establish a percolation in the behaviour of the system, where, depending on the amount of observed data, by being flexible, we may need no communication at all." ], "cite_N": [ "@cite_24" ], "mid": [ "2963372842" ] }
Resolvable Designs for Speeding up Distributed Computing
0
1908.05666
2967081197
Distributed computing frameworks such as MapReduce are often used to process large computational jobs. They operate by partitioning each job into smaller tasks executed on different servers. The servers also need to exchange intermediate values to complete the computation. Experimental evidence suggests that this so-called Shuffle phase can be a significant part of the overall execution time for several classes of jobs. Prior work has demonstrated a natural tradeoff between computation and communication whereby running redundant copies of jobs can reduce the Shuffle traffic load, thereby leading to reduced overall execution times. For a single job, the main drawback of this approach is that it requires the original job to be split into a number of files that grows exponentially in the system parameters. When extended to multiple jobs (with specific function types), these techniques suffer from a limitation of a similar flavor, i.e., they require an exponentially large number of jobs to be executed. In practical scenarios, these requirements can significantly reduce the promised gains of the method. In this work, we show that a class of combinatorial structures called resolvable designs can be used to develop efficient coded distributed computing schemes for both the single and multiple job scenarios considered in prior work. We present both theoretical analysis and exhaustive experimental results (on Amazon EC2 clusters) that demonstrate the performance advantages of our method. For the single and multiple job cases, we obtain speed-ups of 4.69x (and 2.6x over prior work) and 4.31x over the baseline approach, respectively.
As discussed above, both @cite_16 and @cite_5 require a certain problem dimension to be very large. In particular, @cite_5 considers a single job and requires it to be split into a number of tasks that grows exponentially in the problem parameters. On the other hand, @cite_16 considers functions that can be aggregated but requires the number of jobs processed simultaneously to grow exponentially. Our work builds on the initial work in @cite_21 and @cite_2 and makes the following contributions.
{ "abstract": [ "How can we optimally trade extra computing power to reduce the communication load in distributed computing? We answer this question by characterizing a fundamental tradeoff between computation and communication in distributed computing, i.e., the two are inversely proportional to each other. More specifically, a general distributed computing framework, motivated by commonly used structures like MapReduce, is considered, where the overall computation is decomposed into computing a set of “Map” and “Reduce” functions distributedly across multiple computing nodes. A coded scheme, named “coded distributed computing” (CDC), is proposed to demonstrate that increasing the computation load of the Map functions by a factor of @math (i.e., evaluating each function at @math carefully chosen nodes) can create novel coding opportunities that reduce the communication load by the same factor. An information-theoretic lower bound on the communication load is also provided, which matches the communication load achieved by the CDC scheme. As a result, the optimal computation-communication tradeoff in distributed computing is exactly characterized. Finally, the coding techniques of CDC is applied to the Hadoop TeraSort benchmark to develop a novel CodedTeraSort algorithm, which is empirically demonstrated to speed up the overall job execution by @math – @math , for typical settings of interest.", "Communication overhead is one of the major performance bottlenecks in large-scale distributed computing systems, especially for machine learning applications. Conventionally, compression techniques are used to reduce the load of communication by combining intermediate results of the same computation task as much as possible. Recently, via the development of coded distributed computing (CDC), it has been shown that it is possible to code across intermediate results of different tasks to further reduce communication. We propose a new scheme, named compressed coded distributed computing (in short, compressed CDC), which jointly exploits these two techniques (i.e., combining intermediate results of the same computation and coding across intermediate results of different computations) to significantly reduce the communication load for computations with linear aggregation of intermediate results in the final stage that are prevalent in machine learning (e.g., distributed training where partial gradients are computed distributedly and then averaged in the final stage). In particular, compressed CDC first compresses combines several intermediate results for a single computation, and then utilizes multiple such combined packets to create a coded multicast packet that is simultaneously useful for multiple computations. We characterize the achievable communication load of compressed CDC and show that it substantially outperforms both combining methods and CDC scheme.", "Large scale clusters running MapReduce, Spark etc. routinely process data that are on the orders of petabytes or more. The philosophy in these methods is to split the overall job into smaller tasks that are executed on different servers; this is called the map phase. This is followed by a data shuffling phase where appropriate data is exchanged between the servers. The final reduce phase, completes the computation. Prior work has explored a mechanism for reducing the overall execution time by operating on a computation vs. communication tradeoff. 
Specifically, the idea is to run redundant copies of map tasks that are placed on judiciously chosen servers. The shuffle phase exploits the location of the nodes and utilizes coded transmission. The main drawback of this approach is that it requires the original job to be split into a number of map tasks that grows exponentially in the system parameters. This is problematic, as we demonstrate that splitting jobs too finely can in fact adversely affect the overall execution time. In this work we show that one can simultaneously obtain low communication loads while ensuring that jobs do not need to be split too finely. Our approach uncovers a deep relationship between this problem and a class of combinatorial structures called resolvable designs. We present experimental results obtained on Amazon EC2 clusters for a widely known distributed algorithm, namely TeraSort. We obtain over 4.69x improvement in speedup over the baseline approach and more than 2.6x over current state of the art.", "" ], "cite_N": [ "@cite_5", "@cite_16", "@cite_21", "@cite_2" ], "mid": [ "2757498728", "2964183069", "2963717115", "" ] }
Resolvable Designs for Speeding up Distributed Computing
0
1908.05498
2967155990
Detecting scene text of arbitrary shapes has been a challenging task over the past years. In this paper, we propose a novel segmentation-based text detector, namely SAST, which employs a context attended multi-task learning framework based on a Fully Convolutional Network (FCN) to learn various geometric properties for the reconstruction of polygonal representation of text regions. Taking sequential characteristics of text into consideration, a Context Attention Block is introduced to capture long-range dependencies of pixel information to obtain a more reliable segmentation. In post-processing, a Point-to-Quad assignment method is proposed to cluster pixels into text instances by integrating both high-level object knowledge and low-level pixel information in a single shot. Moreover, the polygonal representation of arbitrarily-shaped text can be extracted with the proposed geometric properties much more effectively. Experiments on several benchmarks, including ICDAR2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text, demonstrate that SAST achieves better or comparable performance in terms of accuracy. Furthermore, the proposed algorithm runs at 27.63 FPS on SCUT-CTW1500 with a Hmean of 81.0 on a single NVIDIA Titan Xp graphics card, surpassing most of the existing segmentation-based methods.
Regarding scene text as a special type of object, several methods @cite_10 @cite_43 @cite_57 @cite_54 @cite_20 @cite_45 build on Faster R-CNN @cite_4 , SSD @cite_32 and DenseBox @cite_16 , and generate text bounding boxes by directly regressing box coordinates. TextBoxes @cite_43 and RRD @cite_44 adopt SSD as the base detector and adjust the anchor ratios and convolution kernel sizes to handle the variation in aspect ratios of text instances. @cite_10 and EAST @cite_3 perform direct regression to determine the vertex coordinates of quadrilateral text boundaries in a per-pixel manner without using anchors or proposals, and apply Non-Maximum Suppression (NMS) to obtain the final detection results. RRPN @cite_23 generates inclined proposals with text orientation angle information and proposes a Rotation Region-of-Interest (RRoI) pooling layer to detect arbitrarily-oriented text. Limited by the receptive field of CNNs and the relatively simple representations (rectangular bounding boxes or quadrangles) adopted to describe text, detection-based methods may fall short when dealing with more challenging text instances, such as extremely long or arbitrarily-shaped text.
{ "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. 
Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.", "", "This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.", "Text in natural images is of arbitrary orientations, requiring detection in terms of oriented bounding boxes. Normally, a multi-oriented text detector often involves two key tasks: 1) text presence detection, which is a classification problem disregarding text orientation; 2) oriented bounding box regression, which concerns about text orientation. Previous methods rely on shared features for both tasks, resulting in degraded performance due to the incompatibility of the two tasks. To address this issue, we propose to perform classification and regression on features of different characteristics, extracted by two network branches of different designs. Concretely, the regression branch extracts rotation-sensitive features by actively rotating the convolutional filters, while the classification branch extracts rotation-invariant features by pooling the rotation-sensitive features. The proposed method named Rotation-sensitive Regression Detector (RRD) achieves state-of-the-art performance on several oriented scene text benchmark datasets, including ICDAR 2015, MSRA-TD500, RCTW-17, and COCO-Text. Furthermore, RRD achieves a significant improvement on a ship collection dataset, demonstrating its generality on oriented object detection.", "", "This paper introduces a novel rotation-based framework for arbitrary-oriented text detection in natural scene images. We present the Rotation Region Proposal Networks , which are designed to generate inclined proposals with text orientation angle information. The angle information is then adapted for bounding box regression to make the proposals more accurately fit into the text region in terms of the orientation. The Rotation Region-of-Interest pooling layer is proposed to project arbitrary-oriented proposals to a feature map for a text region classifier. The whole framework is built upon a region-proposal-based architecture, which ensures the computational efficiency of the arbitrary-oriented text detection compared with previous text detection systems. We conduct experiments using the rotation-based framework on three real-world scene text detection datasets and demonstrate its superiority in terms of effectiveness and efficiency over previous approaches.", "How can a single fully convolutional neural network (FCN) perform on object detection? We introduce DenseBox, a unified end-to-end FCN framework that directly predicts bounding boxes and object class confidences through all locations and scales of an image. Our contribution is two-fold. 
First, we show that a single FCN, if designed and optimized carefully, can detect multiple different objects extremely accurately and efficiently. Second, we show that when incorporating with landmark localization during multi-task learning, DenseBox further improves object detection accuray. We present experimental results on public benchmark datasets including MALF face detection and KITTI car detection, that indicate our DenseBox is the state-of-the-art system for detecting challenging objects such as faces and cars.", "In this paper, we first provide a new perspective to divide existing high performance object detection methods into direct and indirect regressions. Direct regression performs boundary regression by predicting the offsets from a given point, while indirect regression predicts the offsets from some bounding box proposals. Then we analyze the drawbacks of the indirect regression, which the recent state-of-the-art detection structures like Faster-RCNN and SSD follows, for multi-oriented scene text detection, and point out the potential superiority of direct regression. To verify this point of view, we propose a deep direct regression based method for multi-oriented scene text detection. Our detection framework is simple and effective with a fully convolutional network and one-step post processing. The fully convolutional network is optimized in an end-to-end way and has bi-task outputs where one is pixel-wise classification between text and non-text, and the other is direct regression to determine the vertex coordinates of quadrilateral text boundaries. The proposed method is particularly beneficial for localizing incidental scene texts. On the ICDAR2015 Incidental Scene Text benchmark, our method achieves the F1-measure of 81 , which is a new state-of-the-art and significantly outperforms previous approaches. On other standard datasets with focused scene texts, our method also reaches the state-of-the-art performance.", "" ], "cite_N": [ "@cite_4", "@cite_54", "@cite_32", "@cite_3", "@cite_57", "@cite_43", "@cite_44", "@cite_45", "@cite_23", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "2613718673", "", "2193145675", "2605982830", "", "2962773189", "2964294787", "", "2593539516", "2129987527", "2952662639", "" ] }
A Single-Shot Arbitrarily-Shaped Text Detector based on Context Attended Multi-Task Learning
Recently, scene text reading has attracted extensive attention in both academia and industry for its numerous applications, such as scene understanding, image and video retrieval, and robot navigation. As the prerequisite in textual information extraction and understanding, text detection is of great importance. Thanks to the surge of deep neural networks, various convolutional neural network (CNN) based methods have been proposed to detect scene text, continuously refreshing the performance records on standard benchmarks [1,15,28,42]. However, text detection in the wild is still a challenging task due to the significant variations in size, aspect ratios, orientations, languages, arbitrary shapes, and even the complex background. In this paper, we seek an effective and efficient detector for text of arbitrary shapes. To detect arbitrarily-shaped text, especially those in curved form, some segmentation-based approaches [23,35,37,44] formulated text detection as a semantic segmentation problem. They employ a fully convolutional network (FCN) [27] to predict text regions, and apply several post-processing steps such as connected component analysis to extract final geometric representation of scene text. Due to the lack of global context information, there are two common challenges for segmentation-based text detectors, as demonstrated in Fig. 1, including: (1) Lying close to each other, text instances are difficult to be separated via semantic segmentation; (2) Long text instances tend to be fragmented easily, especially when character spacing is far or the background is complex, such as the effect of strong illumination. In addition, most segmentation-based detectors have to output large-resolution prediction to precisely describe text contours, thus suffer from time-consuming and redundant postprocessing steps. Some instance segmentation methods [4,7,45] attempt to embed high-level object knowledge or non-local information into the network to alleviate the similar problems described above. Among them, Mask-RCNN [7], a proposal-based segmentation method that cascades detection task (i.e., RPN [29]) and segmentation task by RoIAlign [7], has achieved better performance than those proposal-free methods by a large margin. Recently, some similar ideas [14,24,39] have been introduced to settle the problem of detecting text of arbitrary shapes. However, they are all facing a common challenge that it takes much more time when the number of valid text proposals increases, due to the large number of overlapping computations in segmentation, especially in the case that valid proposals are dense. In contrast, our approach is based on a single-shot view and efficient multi-task mechanism. Inspired by recent works [16,22,33] in general semantic instance segmentation, we aim to design a segmentation-based Single-shot Arbitrarily-Shaped Text detector (SAST), which integrates both the high-level object knowledge and low-level pixel information in a single shot and detects scene text of arbitrary shapes with high accuracy and efficiency. Employing a FCN [27] model, various geometric properties of text regions, including text center line (TCL), text border offset (TBO), text center offset (TCO), and text vertex offset (TVO), are designed to learn simultaneously under a multi-task learning formulation. In addition to skip connections, a Context Attention Block (CAB) is introduced into the architecture to aggregate contextual information for feature augmentation. To address the problems illustrated in Fig. 
1, we propose a point-toquad method for text instance segmentation, which assigns labels to pixels by combining high-level object knowledge from TVO and TCO maps. After clustering TCL map into text instances, more precise polygonal representations of arbitrarily-shaped text are then reconstructed based on TBO maps. Experiments on public datasets demonstrate that the proposed method achieves better or comparable performance in terms of both accuracy and efficiency. The contribution of this paper are three-fold: • We propose a single-shot text detector based on multi-task learning for text of arbitrary shapes including multi-oriented, multilingual, and curved scene text, which is efficient enough for some real-time applications. • The Context Attention Block aggregates the contextual information to augment the feature representation without too much extra calculation cost. • The point-to-quad assignment is robust and effective to separate text instance and alleviate the problem of fragments, which is better than connected component analysis. METHODOLOGY In this section, we will describe our SAST framework for detecting scene text of arbitrary shapes in details. Arbitrary Shape Representation The bounding boxes, rotated rectangles, and quadrilaterals are used as classical representations in most detection-based text detectors, which fails to precisely describe the text instances of arbitrary shapes, as shown in Fig. 1 (b). The segmentation-based methods formulate the detection of arbitrarily-shaped text as a binary segmentation problem. Most of them directly extracted the contours of instance mask as the representation of text, which is easily affected by the completeness and consistency of segmentation. However, PSENet [35] and TextSnake [23] attempted to progressively reconstruct the polygonal representation of detected text based on a shrunk text region, of which the post-processing is complex and tended to be slow. Inspired by those efforts, we aim to design an effective method for arbitrarily-shaped text representation. In this paper, we extract the center line of text region (TCL map) and reconstruct the precise shape representation of text instances with a regressed geometry property, i.e. TBO, which indicates the offset between each pixel in TCL map and corresponding point pair in upper and lower edge of its text region. More specifically, as depicted in Fig. 2, the representation strategy consists of two steps: text center point sampling and border point extraction. Firstly, we sample n points at equidistance intervals from left to right on the center line region of text instance. By taking a further operation, we can determine the corresponding border point pairs based on the sampled center line point with the information provided by TBO maps in the same location. By linking all the border points clockwise, we can obtain a complete text polygon representation. Instead of setting n to a fixed number, we assign it by the ratio of center line length to the average of length of border offset pairs adaptively. Several experiments on curved text datasets prove that our method is efficient and flexible for arbitrarily-shaped text instances. Pipeline The network architecture of FCN-based text detectors are limited to the local receptive fields and short-range contextual information, and makes it struggling to segment some challenging text instances. Thus, we design a Context Attention Block to integrate the longrange dependencies of pixels to obtain a more representative feature. 
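To make the two-step shape representation described above concrete, the following is an illustrative sketch under stated assumptions, not the released implementation: it takes n points sampled left-to-right on the text center line and the TBO offsets read from the map at those points (offsets to the upper and lower border points), and links the resulting border points clockwise into a polygon. The tensor layout and offset ordering are assumptions for the sketch only.

```python
import numpy as np

def rebuild_polygon(center_points, tbo_offsets):
    """Reconstruct a text polygon from sampled center-line points and TBO offsets.

    center_points: (n, 2) array of (y, x) points sampled left-to-right on the
                   text center line (TCL map).
    tbo_offsets:   (n, 4) array holding, for each sampled point, the offset to
                   its upper border point (dy_u, dx_u) and to its lower border
                   point (dy_l, dx_l), read from the TBO map at that location.
    Returns a (2n, 2) polygon: upper border left-to-right, then lower border
    right-to-left, i.e. vertices listed clockwise.
    """
    center_points = np.asarray(center_points, dtype=np.float32)
    tbo_offsets = np.asarray(tbo_offsets, dtype=np.float32)
    upper = center_points + tbo_offsets[:, 0:2]
    lower = center_points + tbo_offsets[:, 2:4]
    # Link border points clockwise: upper edge forward, lower edge reversed.
    return np.concatenate([upper, lower[::-1]], axis=0)
```

As in the text, the number of sampled points n would be chosen adaptively, e.g. from the ratio of the center-line length to the average length of the border offset pairs.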
As a substitute for the Connected Component Analysis, we also propose Point-to-Quad Assignment to cluster the pixels in TCL map into text instances, where we use TCL and TVO maps to restore the minimum quadrilateral bounding boxes of text instances as high-level information. An overview of our framework is depicted in Fig. 3. It consists of three parts, including a stem network, multi-task branches, and a post-processing part. The stem network is based on ResNet-50 [8] with FPN [19] and CABs to produce context-enhanced representation. The TCL, TCO, TVO, and TBO maps are predicted for each text region as a multi-task problem. In the post-processing, we segment text instances by point-to-quad assignment. Concretely, similar to EAST [47], the TVO map regresses the four vertices of bounding quadrangle of text region directly, and the detection results is considered as high-level object knowledge. For each pixel in TCL map, a corresponding offset vector from TCO map will point to a low-level center which the pixel belongs to. Computing the distance between lower-level center and high-level object centers of the detected bounding quadrangle, pixels in the TCL map will be grouped into several text instances. In contrast to the connected component analysis, it takes high-level object knowledge into account, and is proved to be more efficient. More details about the mechanism of point-to-quad assignment will be discussed in this Section 3.4. We sample a adaptive number of points in the center line of each text instance, calculate corresponding points in upper and lower borders with the help of TBO map, and reconstruct the representation of arbitrarily-shaped scene text finally. Shape Representation Network Architecture In this paper, we employ ResNet-50 as the backbone network with the additional fully-connected layers removed. With different levels of feature map from the stem network gradually merged three-times in the FPN manner, a fused feature map X is produced at 1/4 size of the input images. We serially stack two CABs behind to capture rich contextual information. Adding four branches behind the contextenhanced feature maps X ′′ , the TCL and other geometric maps are predicted in parallel, where we adopt a 1 × 1 convolution layer with the number of output channel set to {1, 2, 8, 4} for TCL, TCO, TVO, and TBO map respectively. It is worth mentioning that all the output channels of convolution layers in the FPN is set to 128 directly, regardless of whether the kernel size is 1 or 3. Context Attention Block. The segmentation results of FCN and the post-processing steps depend mainly on local information. The proposed CAB utilizes a self-attention mechanism [34] to aggregate the contextual information to augment the feature representation, of which the details is demonstrated in Fig. 4. In order to alleviate the huge computational overhead caused by direct use of self-attention, CAB only considers the similarity between each location in feature map and others in the same horizontal or vertical column. The feature map X is the output of ResNet-50 backbone which is in size of N × H × W × C. To collect contextual information horizontally, we adopt three convolution layers behind X in parallel to get { f θ , f ϕ , f д } and reshape them into {N × H } × W × C, then multiply f ϕ by the transpose of f θ to get an attention map of size {N × H } × W × W , which is activated by a Sigmoid function. 
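The sketch below illustrates the horizontal branch of such a block, including the aggregation step described next; it is an assumption-laden reimplementation of the idea rather than the authors' code (channel counts, weight sharing between directions, and the final concatenation/1x1 reduction are omitted). The vertical branch would apply the same computation along the height dimension.

```python
# Hedged sketch of the horizontal branch of a Context Attention Block (CAB).
import torch
import torch.nn as nn

class HorizontalContextBranch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Three parallel 1x1 convolutions producing f_theta, f_phi, f_g.
        self.f_theta = nn.Conv2d(channels, channels, kernel_size=1)
        self.f_phi = nn.Conv2d(channels, channels, kernel_size=1)
        self.f_g = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                      # x: (N, C, H, W)
        n, c, h, w = x.shape
        # Reshape each projection to ({N*H}, W, C) so attention is computed
        # only among pixels that share the same row.
        theta = self.f_theta(x).permute(0, 2, 3, 1).reshape(n * h, w, c)
        phi = self.f_phi(x).permute(0, 2, 3, 1).reshape(n * h, w, c)
        g = self.f_g(x).permute(0, 2, 3, 1).reshape(n * h, w, c)

        # ({N*H}, W, W) attention map between positions of the same row,
        # activated with a sigmoid as described in the text.
        attn = torch.sigmoid(torch.bmm(phi, theta.transpose(1, 2)))

        # Aggregate row-wise context and restore the (N, C, H, W) layout.
        out = torch.bmm(attn, g).reshape(n, h, w, c).permute(0, 3, 1, 2)
        return out
```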
The horizontal contextual information enhanced feature, which is finally resized back to N × H × W × C, is obtained by multiplying f д with the attention map. Obtaining the vertical contextual information differs only in that { f θ , f ϕ , f д } are transposed to {N × H } × C × W at the beginning, as shown in the cyan boxes in Fig. 4. Meanwhile, a short-cut path is used to preserve local features. By concatenating the horizontal contextual map, the vertical contextual map, and the short-cut map, and reducing the channel number of X ′ with a 1 × 1 convolutional layer, the CAB aggregates long-range pixel-wise contextual information in both the horizontal and vertical directions. Besides, the convolutional layers denoted by the purple and cyan boxes share weights. By serially connecting two CABs, each pixel can finally capture long-range dependencies from all pixels, as depicted at the bottom of Fig. 4, leading to a more powerful context-enhanced feature map X ′′, which also helps alleviate the problems caused by the limited receptive field when dealing with more challenging text instances, such as long text. (Figure 4: Context Attention Blocks: a single CAB module aggregates pixel-wise contextual information both horizontally and vertically, and long-range dependencies from all pixels can be captured by serially connecting two CABs.) Text Instance Segmentation For most proposal-free text detectors of arbitrary shapes, morphological post-processing such as connected component analysis is adopted to achieve text instance segmentation, which does not explicitly incorporate high-level object knowledge and easily fails on complex scene text. In this section, we describe how to generate a text instance segmentation from the TCL, TCO and TVO maps with high-level object information. Point-to-Quad Assignment. As depicted in Fig. 3, the first step of text instance segmentation is detecting candidate text quadrangles based on the TCL and TVO maps. Similar to EAST [47], we binarize the TCL map, whose pixel values are in the range of [0, 1], with a given threshold, and restore the corresponding quadrangle bounding boxes with the four vertex offsets provided by the TVO map. NMS is adopted to suppress overlapping candidates. The final quadrangle candidates shown in Fig. 3 (b) can be regarded as high-level object knowledge. The second and last step in text instance segmentation is clustering the text region responses in the binarized TCL map into text instances. As Fig. 3 (c) shows, the TCO map is a pixel-wise prediction of offset vectors pointing to the center of the bounding box that each pixel in the TCL map should belong to. Under the assumption that pixels in the TCL map belonging to the same text instance should point to the same object-level center, we cluster the TCL map into several text instances by assigning each response pixel to one of the quadrangle boxes generated in the first step. Moreover, we do not require the predicted boxes from the first step to fully bound the text regions in the input image; pixels outside a predicted box will still mostly be assigned to the correct text instance.
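The point-to-quad assignment just described can be summarized in a few lines of array code. The sketch below is a minimal, hedged reimplementation of the idea (not the released code): it takes a binarized TCL map, the TCO offset map, and the candidate box centers recovered from the TVO branch after NMS, and assigns each TCL pixel to the candidate whose center is closest to the pixel's predicted object-level center. In the actual model the maps are predicted at 1/4 resolution, so coordinates would be scaled accordingly.

```python
import numpy as np

def point_to_quad_assignment(tcl_mask, tco_map, box_centers):
    """Cluster TCL pixels into text instances.

    tcl_mask:    (H, W) boolean map, True where the pixel lies on a text center line.
    tco_map:     (H, W, 2) predicted (dy, dx) offsets from each pixel to the
                 center of the text instance it belongs to.
    box_centers: (K, 2) centers of the K candidate quadrangles restored from
                 the TVO map (after NMS).
    Returns an (H, W) int map with values in {-1, 0, ..., K-1}: -1 marks
    non-text pixels, k assigns a pixel to candidate k.
    """
    labels = np.full(tcl_mask.shape, -1, dtype=np.int32)
    ys, xs = np.nonzero(tcl_mask)
    if len(ys) == 0 or len(box_centers) == 0:
        return labels

    # Predicted object-level center of every TCL pixel: its own coordinates
    # plus the regressed TCO offset.
    pred_centers = np.stack([ys, xs], axis=1) + tco_map[ys, xs]

    # Distance from each predicted center to every candidate box center,
    # then pick the nearest candidate for each pixel.
    dists = np.linalg.norm(
        pred_centers[:, None, :] - np.asarray(box_centers)[None, :, :], axis=2
    )
    labels[ys, xs] = np.argmin(dists, axis=1)
    return labels
```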
Integrating high-level object knowledge with low-level pixel information, the proposed post-processing efficiently clusters each pixel in the TCL map to its best matching text instance, and helps not only to separate text instances that lie close to each other but also to alleviate fragmentation when dealing with extremely long text. Label Generation and Training Objectives In this part, the generation of the TCL, TCO, TVO, and TBO maps is discussed. TCL is a shrunk version of the text region and is a one-channel text/non-text segmentation map. The other label maps, i.e., TCO, TVO, and TBO, are per-pixel offsets defined with reference to the pixels in the TCL map. For each text instance, we calculate the center and four vertices of the minimum enclosing quadrangle of its annotation polygon, as depicted in Fig. 5 (c) (d). The TCO map stores the offset between each pixel in the TCL map and the center of this quadrangle, while the TVO map stores the offsets to its four vertices. Here are more details about the generation of a TBO map. For a quadrangle text annotation with vertices $\{V_1, V_2, V_3, V_4\}$ listed clockwise, with $V_1$ the top-left vertex, as shown in Fig. 5 (b), the generation of TBO mainly contains two steps: first, we find a corresponding point pair on the top and bottom boundaries for each point in the TCL map, then calculate the corresponding offset pair. Using the average slope of the upper and lower sides of the quadrangle, the line crossing a point $P_0$ in the TCL map can be determined, and the intersection points $\{P_1, P_2\}$ of this line with the left and right edges of the bounding quadrangle are directly calculated with algebraic methods. A pair of corresponding points $\{P_{upper}, P_{lower}\}$ for $P_0$ can then be determined from $\frac{\|P_0 - P_1\|}{\|P_2 - P_1\|} = \frac{\|P_{upper} - V_1\|}{\|V_2 - V_1\|} = \frac{\|P_{lower} - V_4\|}{\|V_3 - V_4\|}$. In the second step, the offsets between $P_0$ and $\{P_{upper}, P_{lower}\}$ are computed. Polygons with more than four vertices are treated as a series of connected quadrangles, and the TBO of such polygons is generated quadrangle by quadrangle as described above. For non-TCL pixels, the corresponding geometry attributes are set to 0 for convenience. At training time, the whole network is trained in an end-to-end manner, and the loss of the model can be formulated as $L_{total} = \lambda_1 L_{tcl} + \lambda_2 L_{tco} + \lambda_3 L_{tvo} + \lambda_4 L_{tbo}$, where $L_{tcl}$, $L_{tco}$, $L_{tvo}$ and $L_{tbo}$ denote the losses of the TCL, TCO, TVO, and TBO maps; the first is a binary segmentation loss while the others are regression losses. We train the segmentation branch by minimizing the Dice loss [27], and the Smooth L1 loss [5] is adopted for the regression losses. The loss weights $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ trade off the four tasks, which are equally important in this work, so we choose the values {1.0, 0.5, 0.5, 1.0} by making the four loss gradient norms close during back-propagation. To compare the effectiveness of SAST with existing methods, we perform thorough experiments on four public text detection datasets, i.e., ICDAR 2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text. Datasets The datasets used for the experiments in this paper are briefly introduced below. SynthText. The SynthText dataset [6] is composed of 800,000 natural images, on which text in random colors, fonts, scales, and orientations is carefully rendered to have a realistic look. We use the dataset with word-level labels to pre-train our model. ICDAR 2015. The ICDAR 2015 dataset [15] was collected for the ICDAR 2015 Robust Reading Competition, with 1,000 natural images for training and 500 for testing.
1908.05498
2967155990
Detecting scene text of arbitrary shapes has been a challenging task over the past years. In this paper, we propose a novel segmentation-based text detector, namely SAST, which employs a context attended multi-task learning framework based on a Fully Convolutional Network (FCN) to learn various geometric properties for the reconstruction of polygonal representation of text regions. Taking sequential characteristics of text into consideration, a Context Attention Block is introduced to capture long-range dependencies of pixel information to obtain a more reliable segmentation. In post-processing, a Point-to-Quad assignment method is proposed to cluster pixels into text instances by integrating both high-level object knowledge and low-level pixel information in a single shot. Moreover, the polygonal representation of arbitrarily-shaped text can be extracted with the proposed geometric properties much more effectively. Experiments on several benchmarks, including ICDAR2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text, demonstrate that SAST achieves better or comparable performance in terms of accuracy. Furthermore, the proposed algorithm runs at 27.63 FPS on SCUT-CTW1500 with a Hmean of 81.0 on a single NVIDIA Titan Xp graphics card, surpassing most of the existing segmentation-based methods.
Instance segmentation is a challenging task that involves both segmentation and classification. The most recent and successful two-stage representative is Mask R-CNN @cite_0 , which achieves impressive results on public benchmarks but requires a relatively long execution time due to per-proposal computation and its deep stem network. Other frameworks rely mostly on pixel features generated by a single FCN forward pass and employ post-processing such as graphical models, template matching, or pixel embedding to cluster pixels belonging to the same instance. More specifically, Non-local Networks @cite_36 utilizes a self-attention @cite_18 mechanism to enable each pixel feature to perceive features from all other positions, while CCNet @cite_37 harvests contextual information from all pixels more efficiently by stacking two criss-cross attention modules, which substantially augments the feature representation. In the post-processing step, @cite_55 present a pixel affinity scheme and cluster pixels into instances with a simple yet effective graph merge algorithm. InstanceCut @cite_19 and the work of @cite_28 intentionally predict object boundaries to facilitate the separation of object instances.
{ "abstract": [ "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.", "Long-range dependencies can capture useful contextual information to benefit visual understanding problems. In this work, we propose a Criss-Cross Network (CCNet) for obtaining such important information through a more effective and efficient way. Concretely, for each pixel, our CCNet can harvest the contextual information of its surrounding pixels on the criss-cross path through a novel criss-cross attention module. By taking a further recurrent operation, each pixel can finally capture the long-range dependencies from all pixels. Overall, our CCNet is with the following merits: 1) GPU memory friendly. Compared with the non-local block, the recurrent criss-cross attention module requires @math less GPU memory usage. 2) High computational efficiency. The recurrent criss-cross attention significantly reduces FLOPs by about 85 of the non-local block in computing long-range dependencies. 3) The state-of-the-art performance. We conduct extensive experiments on popular semantic segmentation benchmarks including Cityscapes, ADE20K, and instance segmentation benchmark COCO. In particular, our CCNet achieves the mIoU score of 81.4 and 45.22 on Cityscapes test set and ADE20K validation set, respectively, which are the new state-of-the-art results. We make the code publicly available at this https URL .", "", "Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2 mean IOU on PASCAL VOC 2012 and 80.3 mean IOU on Cityscapes dataset.", "We present an instance segmentation scheme based on pixel affinity information, which is the relationship of two pixels belonging to the same instance. In our scheme, we use two neural networks with similar structures. One predicts the pixel level semantic score and the other is designed to derive pixel affinities. 
Regarding pixels as the vertexes and affinities as edges, we then propose a simple yet effective graph merge algorithm to cluster pixels into instances. Experiments show that our scheme generates fine grained instance masks. With Cityscape training data, the proposed scheme achieves 27.3 AP on test set.", "In this paper, we present a new Mask R-CNN based text detection approach which can robustly detect multi-oriented and curved text from natural scene images in a unified manner. To enhance the feature representation ability of Mask R-CNN for text detection tasks, we propose to use the Pyramid Attention Network (PAN) as a new backbone network of Mask R-CNN. Experiments demonstrate that PAN can suppress false alarms caused by text-like backgrounds more effectively. Our proposed approach has achieved superior performance on both multi-oriented (ICDAR-2015, ICDAR-2017 MLT) and curved (SCUT-CTW1500) text detection benchmark tasks by only using single-scale and single-model testing.", "This work addresses the task of instance-aware semantic segmentation. Our key motivation is to design a simple method with a new modelling-paradigm, which therefore has a different trade-off between advantages and disadvantages compared to known approaches. Our approach, we term InstanceCut, represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard convolutional neural network for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. We evaluate our approach on the challenging CityScapes dataset. Despite the conceptual simplicity of our approach, we achieve the best result among all published methods, and perform particularly well for rare object classes." ], "cite_N": [ "@cite_18", "@cite_37", "@cite_36", "@cite_28", "@cite_55", "@cite_0", "@cite_19" ], "mid": [ "2963403868", "2902930830", "", "2799166040", "2895065325", "2962781062", "2558156561" ] }
A Single-Shot Arbitrarily-Shaped Text Detector based on Context Attended Multi-Task Learning
Recently, scene text reading has attracted extensive attention in both academia and industry for its numerous applications, such as scene understanding, image and video retrieval, and robot navigation. As the prerequisite in textual information extraction and understanding, text detection is of great importance. Thanks to the surge of deep neural networks, various convolutional neural network (CNN) based methods have been proposed to detect scene text, continuously refreshing the performance records on standard benchmarks [1,15,28,42]. However, text detection in the wild is still a challenging task due to the significant variations in size, aspect ratios, orientations, languages, arbitrary shapes, and even the complex background. In this paper, we seek an effective and efficient detector for text of arbitrary shapes. To detect arbitrarily-shaped text, especially those in curved form, some segmentation-based approaches [23,35,37,44] formulated text detection as a semantic segmentation problem. They employ a fully convolutional network (FCN) [27] to predict text regions, and apply several post-processing steps such as connected component analysis to extract final geometric representation of scene text. Due to the lack of global context information, there are two common challenges for segmentation-based text detectors, as demonstrated in Fig. 1, including: (1) Lying close to each other, text instances are difficult to be separated via semantic segmentation; (2) Long text instances tend to be fragmented easily, especially when character spacing is far or the background is complex, such as the effect of strong illumination. In addition, most segmentation-based detectors have to output large-resolution prediction to precisely describe text contours, thus suffer from time-consuming and redundant postprocessing steps. Some instance segmentation methods [4,7,45] attempt to embed high-level object knowledge or non-local information into the network to alleviate the similar problems described above. Among them, Mask-RCNN [7], a proposal-based segmentation method that cascades detection task (i.e., RPN [29]) and segmentation task by RoIAlign [7], has achieved better performance than those proposal-free methods by a large margin. Recently, some similar ideas [14,24,39] have been introduced to settle the problem of detecting text of arbitrary shapes. However, they are all facing a common challenge that it takes much more time when the number of valid text proposals increases, due to the large number of overlapping computations in segmentation, especially in the case that valid proposals are dense. In contrast, our approach is based on a single-shot view and efficient multi-task mechanism. Inspired by recent works [16,22,33] in general semantic instance segmentation, we aim to design a segmentation-based Single-shot Arbitrarily-Shaped Text detector (SAST), which integrates both the high-level object knowledge and low-level pixel information in a single shot and detects scene text of arbitrary shapes with high accuracy and efficiency. Employing a FCN [27] model, various geometric properties of text regions, including text center line (TCL), text border offset (TBO), text center offset (TCO), and text vertex offset (TVO), are designed to learn simultaneously under a multi-task learning formulation. In addition to skip connections, a Context Attention Block (CAB) is introduced into the architecture to aggregate contextual information for feature augmentation. To address the problems illustrated in Fig. 
1, we propose a point-toquad method for text instance segmentation, which assigns labels to pixels by combining high-level object knowledge from TVO and TCO maps. After clustering TCL map into text instances, more precise polygonal representations of arbitrarily-shaped text are then reconstructed based on TBO maps. Experiments on public datasets demonstrate that the proposed method achieves better or comparable performance in terms of both accuracy and efficiency. The contribution of this paper are three-fold: • We propose a single-shot text detector based on multi-task learning for text of arbitrary shapes including multi-oriented, multilingual, and curved scene text, which is efficient enough for some real-time applications. • The Context Attention Block aggregates the contextual information to augment the feature representation without too much extra calculation cost. • The point-to-quad assignment is robust and effective to separate text instance and alleviate the problem of fragments, which is better than connected component analysis. METHODOLOGY In this section, we will describe our SAST framework for detecting scene text of arbitrary shapes in details. Arbitrary Shape Representation The bounding boxes, rotated rectangles, and quadrilaterals are used as classical representations in most detection-based text detectors, which fails to precisely describe the text instances of arbitrary shapes, as shown in Fig. 1 (b). The segmentation-based methods formulate the detection of arbitrarily-shaped text as a binary segmentation problem. Most of them directly extracted the contours of instance mask as the representation of text, which is easily affected by the completeness and consistency of segmentation. However, PSENet [35] and TextSnake [23] attempted to progressively reconstruct the polygonal representation of detected text based on a shrunk text region, of which the post-processing is complex and tended to be slow. Inspired by those efforts, we aim to design an effective method for arbitrarily-shaped text representation. In this paper, we extract the center line of text region (TCL map) and reconstruct the precise shape representation of text instances with a regressed geometry property, i.e. TBO, which indicates the offset between each pixel in TCL map and corresponding point pair in upper and lower edge of its text region. More specifically, as depicted in Fig. 2, the representation strategy consists of two steps: text center point sampling and border point extraction. Firstly, we sample n points at equidistance intervals from left to right on the center line region of text instance. By taking a further operation, we can determine the corresponding border point pairs based on the sampled center line point with the information provided by TBO maps in the same location. By linking all the border points clockwise, we can obtain a complete text polygon representation. Instead of setting n to a fixed number, we assign it by the ratio of center line length to the average of length of border offset pairs adaptively. Several experiments on curved text datasets prove that our method is efficient and flexible for arbitrarily-shaped text instances. Pipeline The network architecture of FCN-based text detectors are limited to the local receptive fields and short-range contextual information, and makes it struggling to segment some challenging text instances. Thus, we design a Context Attention Block to integrate the longrange dependencies of pixels to obtain a more representative feature. 
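Before moving on to the pipeline details, the two-step shape representation described above (sample points on the text center line, then recover the corresponding upper and lower border points from the TBO map) can be sketched as follows. This is an illustrative reading, not the authors' released code: the function names and the assumed (H, W, 4) TBO layout holding per-pixel upper/lower border offsets are our own.

```python
import numpy as np

def adaptive_num_points(center_line_length, border_offsets):
    """Pick n as the ratio of center-line length to the average border-offset length."""
    avg_offset = float(np.mean([np.linalg.norm(o) for o in border_offsets])) + 1e-6
    return max(2, int(round(center_line_length / avg_offset)))

def reconstruct_polygon(center_points, tbo_map):
    """Rebuild a text polygon from sampled center-line points and a TBO map."""
    upper, lower = [], []
    for x, y in center_points:
        # Assumed layout: tbo_map[y, x] = (dx_up, dy_up, dx_low, dy_low).
        dx_u, dy_u, dx_l, dy_l = tbo_map[int(y), int(x)]
        upper.append((x + dx_u, y + dy_u))   # point on the upper border
        lower.append((x + dx_l, y + dy_l))   # point on the lower border
    # Link border points clockwise: upper edge left-to-right, lower edge right-to-left.
    return np.array(upper + lower[::-1], dtype=np.float32)
```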
As a substitute for connected component analysis, we also propose Point-to-Quad Assignment to cluster the pixels in the TCL map into text instances, where we use the TCL and TVO maps to restore the minimum quadrilateral bounding boxes of text instances as high-level information. An overview of our framework is depicted in Fig. 3. It consists of three parts: a stem network, multi-task branches, and a post-processing part. The stem network is based on ResNet-50 [8] with FPN [19] and CABs to produce a context-enhanced representation. The TCL, TCO, TVO, and TBO maps are predicted for each text region as a multi-task problem. In the post-processing, we segment text instances by point-to-quad assignment. Concretely, similar to EAST [47], the TVO map directly regresses the four vertices of the bounding quadrangle of a text region, and the detection results are considered as high-level object knowledge. For each pixel in the TCL map, a corresponding offset vector from the TCO map points to the low-level center to which the pixel belongs. By computing the distance between this low-level center and the high-level object centers of the detected bounding quadrangles, pixels in the TCL map are grouped into several text instances. In contrast to connected component analysis, this takes high-level object knowledge into account and proves to be more efficient. More details about the mechanism of point-to-quad assignment are discussed in Section 3.4. We sample an adaptive number of points on the center line of each text instance, calculate the corresponding points on the upper and lower borders with the help of the TBO map, and finally reconstruct the representation of arbitrarily-shaped scene text. Network Architecture In this paper, we employ ResNet-50 as the backbone network with the additional fully-connected layers removed. With different levels of feature maps from the stem network gradually merged three times in the FPN manner, a fused feature map X is produced at 1/4 the size of the input images. We serially stack two CABs behind it to capture rich contextual information. Adding four branches behind the context-enhanced feature map X ′′, the TCL and other geometric maps are predicted in parallel, where we adopt a 1 × 1 convolution layer with the number of output channels set to {1, 2, 8, 4} for the TCL, TCO, TVO, and TBO maps, respectively. It is worth mentioning that all the output channels of the convolution layers in the FPN are set to 128, regardless of whether the kernel size is 1 or 3. Context Attention Block. The segmentation results of an FCN and the post-processing steps depend mainly on local information. The proposed CAB utilizes a self-attention mechanism [34] to aggregate contextual information and augment the feature representation, the details of which are shown in Fig. 4. In order to alleviate the huge computational overhead caused by the direct use of self-attention, the CAB only considers the similarity between each location in the feature map and the other locations in the same row or column. The feature map X is the output of the ResNet-50 backbone, which has size N × H × W × C. To collect contextual information horizontally, we adopt three convolution layers behind X in parallel to get {f_θ, f_ϕ, f_g} and reshape them into {N × H} × W × C, then multiply f_ϕ by the transpose of f_θ to get an attention map of size {N × H} × W × W, which is activated by a Sigmoid function.
A horizontally context-enhanced feature, which is finally resized to N × H × W × C, is obtained by multiplying f_g with the attention map. Obtaining the vertical contextual information is slightly different in that {f_θ, f_ϕ, f_g} are transposed to {N × H} × C × W at the beginning, as shown in the cyan boxes in Fig. 4. Meanwhile, a short-cut path is used to preserve local features. By concatenating the horizontal contextual map, the vertical contextual map, and the short-cut map together and reducing the channel number of X ′ with a 1 × 1 convolutional layer, the CAB aggregates long-range pixel-wise contextual information in both horizontal and vertical directions. Besides, the convolutional layers denoted by the purple and cyan boxes share weights. By serially connecting two CABs, each pixel can finally capture long-range dependencies from all pixels, as depicted at the bottom of Fig. 4, leading to a more powerful context-enhanced feature map X ′′, which also helps alleviate the problems caused by the limited receptive field when dealing with more challenging text instances, such as long text. (Figure 4: Context Attention Blocks: a single CAB module aggregates pixel-wise contextual information both horizontally and vertically, and long-range dependencies from all pixels can be captured by serially connecting two CABs.) Text Instance Segmentation For most proposal-free text detectors of arbitrary shapes, morphological post-processing such as connected component analysis is adopted to achieve text instance segmentation, which does not explicitly incorporate high-level object knowledge and easily fails on complex scene text. In this section, we describe how to generate a text instance segmentation with the TCL, TCO, and TVO maps using high-level object information. Point-to-Quad Assignment. As depicted in Fig. 3, the first step of text instance segmentation is detecting candidate text quadrangles based on the TCL and TVO maps. Similar to EAST [47], we binarize the TCL map, whose pixel values are in the range [0, 1], with a given threshold, and restore the corresponding quadrangle bounding boxes with the four vertex offsets provided by the TVO map. NMS is adopted to suppress overlapping candidates. The final quadrangle candidates shown in Fig. 3 (b) can be considered as high-level knowledge. The second and last step in text instance segmentation is clustering the responses of text regions in the binarized TCL map into text instances. As Fig. 3 (c) shows, the TCO map is a pixel-wise prediction of offset vectors pointing to the center of the bounding box to which each pixel in the TCL map should belong. Under the strong assumption that pixels in the TCL map belonging to the same text instance point to the same object-level center, we cluster the TCL map into several text instances by assigning each response pixel to one of the quadrangle boxes generated in the first step. Moreover, we do not require the predicted boxes from the first step to fully bound the text region in the input image, and pixels outside a predicted box will mostly still be assigned to the corresponding text instance.
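The two-step point-to-quad assignment just described could be implemented roughly as below. The array layouts and the `nms_quads` helper (standing in for the standard NMS step mentioned in the text) are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def point_to_quad_assignment(tcl_map, tvo_map, tco_map, score_thresh=0.5):
    """Cluster TCL pixels into text instances using detected quadrangles (a sketch).

    Assumed layouts: tcl_map is (H, W) in [0, 1], tvo_map is (H, W, 8) with offsets
    to the four quadrangle vertices, tco_map is (H, W, 2) with offsets to the
    instance center. `nms_quads` is assumed to be provided elsewhere.
    """
    ys, xs = np.where(tcl_map > score_thresh)                  # step 1: binarize the TCL map
    pts = np.stack([xs, ys], axis=1).astype(np.float32)

    # Restore candidate quadrangles from TVO offsets, then suppress overlaps.
    quads = pts[:, None, :] + tvo_map[ys, xs].reshape(-1, 4, 2)
    quads = nms_quads(quads, tcl_map[ys, xs])                  # high-level object knowledge
    quad_centers = quads.mean(axis=1)                          # (K, 2) object-level centers

    # Step 2: every TCL pixel votes for a center via TCO and joins the nearest quad.
    voted = pts + tco_map[ys, xs]
    dists = np.linalg.norm(voted[:, None, :] - quad_centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)                              # instance id per TCL pixel
    return quads, pts, labels
```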
Integrating high-level object knowledge and low-level pixel information, the proposed post-processing efficiently clusters each pixel in the TCL map to its best matching text instance, and helps not only to separate text instances that are close to each other but also to alleviate fragments when dealing with extremely long text. Label Generation and Training Objectives In this part, the generation of the TCL, TCO, TVO, and TBO maps is discussed. TCL is the shrunk version of the text region, and it is a one-channel segmentation map for text/non-text. The other label maps, i.e., TCO, TVO, and TBO, are per-pixel offsets with reference to the pixels in the TCL map. For each text instance, we calculate the center and the four vertices of the minimum enclosing quadrangle from its annotation polygon, as depicted in Fig. 5 (c) and (d). The TCO map stores the offsets between pixels in the TCL map and this center, while the TVO map stores the offsets to the four vertices. Here are more details about the generation of a TBO map. For a quadrangle text annotation with vertices {V_1, V_2, V_3, V_4} in clockwise order, where V_1 is the top-left vertex, as shown in Fig. 5 (b), the generation of TBO mainly contains two steps: first, we find a corresponding point pair on the top and bottom boundaries for each point in the TCL map, then calculate the corresponding offset pair. Using the average slope of the upper and lower sides of the quadrangle, the line crossing a point P_0 in the TCL map can be determined, and it is easy to directly calculate the intersection points {P_1, P_2} of this line with the left and right edges of the bounding quadrangle by algebraic methods. The pair of corresponding points {P_upper, P_lower} for P_0 can then be determined from: (P_0 − P_1) / (P_2 − P_1) = (P_upper − V_1) / (V_2 − V_1) = (P_lower − V_4) / (V_3 − V_4). In the second step, the offsets between P_0 and {P_upper, P_lower} can be easily computed. Polygons with more than four vertices are treated as a series of connected quadrangles, and the TBO of a polygon can be generated gradually from its quadrangles as described before. For non-TCL pixels, the corresponding geometric attributes are set to 0 for convenience. At the training stage, the whole network is trained in an end-to-end manner, and the loss of the model is formulated as: L_total = λ_1 L_tcl + λ_2 L_tco + λ_3 L_tvo + λ_4 L_tbo, where L_tcl, L_tco, L_tvo, and L_tbo represent the losses of the TCL, TCO, TVO, and TBO maps; the first is a binary segmentation loss while the others are regression losses. We train the segmentation branch by minimizing the Dice loss [27], and the Smooth L1 loss [5] is adopted for the regression losses. The loss weights λ_1, λ_2, λ_3, and λ_4 are a trade-off between the four tasks, which are equally important in this work, so we determine the values {1.0, 0.5, 0.5, 1.0} by making the four loss gradient norms close in back-propagation.
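A minimal PyTorch-style sketch of the training objective described above follows; the paper does not name a framework, and the tensor layouts, masking, and normalization here are our assumptions rather than the authors' exact recipe.

```python
import torch

def dice_loss(pred, gt, eps=1e-6):
    # Binary Dice loss for the TCL segmentation branch.
    inter = (pred * gt).sum()
    return 1.0 - 2.0 * inter / (pred.sum() + gt.sum() + eps)

def sast_loss(pred, gt, tcl_mask, weights=(1.0, 0.5, 0.5, 1.0)):
    """L_total = l1*L_tcl + l2*L_tco + l3*L_tvo + l4*L_tbo (a sketch).

    Assumed shapes: pred/gt are dicts of tensors, the segmentation map is (N, H, W)
    and each regression map is (N, H, W, C); regression losses are evaluated on
    TCL pixels only, which may differ from the authors' masking scheme.
    """
    l1, l2, l3, l4 = weights
    smooth_l1 = torch.nn.SmoothL1Loss(reduction="mean")
    m = tcl_mask.bool()                     # (N, H, W) mask of text-center pixels
    loss_tcl = dice_loss(pred["tcl"], gt["tcl"])
    loss_tco = smooth_l1(pred["tco"][m], gt["tco"][m])
    loss_tvo = smooth_l1(pred["tvo"][m], gt["tvo"][m])
    loss_tbo = smooth_l1(pred["tbo"][m], gt["tbo"][m])
    return l1 * loss_tcl + l2 * loss_tco + l3 * loss_tvo + l4 * loss_tbo
```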
To compare the effectiveness of SAST with existing methods, we perform thorough experiments on four public text detection datasets, i.e., ICDAR 2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text. Datasets The datasets used for the experiments in this paper are briefly introduced below. SynthText. The SynthText dataset [6] is composed of 800,000 natural images, on which text in random colors, fonts, scales, and orientations is carefully rendered to have a realistic look. We use the dataset with word-level labels to pre-train our model. ICDAR 2015. The ICDAR 2015 dataset [15] was collected for the ICDAR 2015 Robust Reading Competition, with 1,000 natural images for training and 500 for testing. The images were acquired using Google Glass, and the text appears incidentally in the scene. All text instances are annotated with word-level quadrangles. ICDAR2017-MLT. The ICDAR2017-MLT [28] is a large-scale multi-lingual text dataset, which includes 7,200 training images, 1,800 validation images, and 9,000 test images. The dataset covers multi-oriented and multi-lingual scene text. The text regions in ICDAR2017-MLT are also annotated with quadrangles. SCUT-CTW1500. The SCUT-CTW1500 [42] is a challenging dataset for curved text detection. It consists of 1,000 training images and 500 test images, and the text instances are largely in English and Chinese. Different from traditional datasets, the text instances in SCUT-CTW1500 are labelled with polygons of 14 vertices. Total-Text. The Total-Text [1] is another curved text benchmark, which consists of 1,255 training images and 300 testing images with three different text orientations: horizontal, multi-oriented, and curved. The annotations are labelled at the word level. Evaluation Metrics. The performance on ICDAR2015, Total-Text, SCUT-CTW1500, and ICDAR2017-MLT is evaluated using the protocols provided in [1,15,28,42], respectively. Implementation Details Training. ResNet-50 is used as the network backbone, with weights pretrained on ImageNet [3]. The skip connections follow the FPN fashion, with the output channels of the convolutional layers set to 128, and the final output is at 1/4 the size of the input images. All upsampling operators use bilinear interpolation; the classification branch is activated with a sigmoid, while the regression branches, i.e., the TCO, TVO, and TBO maps, are the direct outputs of the last convolution layers. The training process is divided into two steps, i.e., warming-up and fine-tuning. In the warming-up step, we apply the Adam optimizer to train our model on the SynthText dataset with a learning rate of 1e-4 and a learning-rate decay factor of 0.94. In the fine-tuning step, the learning rate is re-initialized to 1e-4 and the model is fine-tuned on ICDAR 2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text. All experiments are performed on a workstation with the following configuration, CPU: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz x16; GPU: NVIDIA TITAN Xp ×4; RAM: 64GB. During training, the batch size is set to 8 per GPU. Data Augmentation. We randomly crop text image regions, then resize and pad them to 512 × 512. Specifically, for datasets with polygon labels, we crop images without crossing text instances to avoid destroying the polygon annotations. The cropped image regions are rotated randomly in 4 directions (0°, 90°, 180°, and 270°) and standardized by subtracting the RGB mean value of the ImageNet dataset. Text regions marked as "DO NOT CARE", or whose shortest edge is less than 8 pixels, are ignored in the training process. Testing. In the inference phase, unless otherwise stated, we set the longer side to 1536 for single-scale testing, and to 512, 768, 1536, and 2048 for multi-scale testing, while keeping the aspect ratio unchanged. A specified range is assigned to each testing scale, and detections from different scales are combined using NMS, which is inspired by SNIP [31].
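The multi-scale testing rule above (a valid detection-size range per scale, then NMS across scales) could look roughly like the sketch below. `detect_fn`, `nms`, and the example ranges are placeholders we made up for illustration; the paper does not publish its exact ranges.

```python
def multi_scale_detect(image, detect_fn, scale_ranges, iou_thresh=0.5):
    """SNIP-inspired multi-scale testing (a sketch, not the authors' exact rule).

    `detect_fn(image, long_side)` is assumed to run single-scale inference and
    return (polygon, score, size) tuples, where `size` is the longer side of the
    detection in the original image. `scale_ranges` maps each test scale to the
    detection sizes it may keep, e.g. {512: (0, 128), 768: (96, 320),
    1536: (256, 1024), 2048: (768, 1e9)}; these values are illustrative only.
    `nms` is assumed to be provided elsewhere.
    """
    kept = []
    for long_side, (lo, hi) in scale_ranges.items():
        for poly, score, size in detect_fn(image, long_side):
            if lo <= size <= hi:          # keep detections whose size suits this scale
                kept.append((poly, score))
    return nms(kept, iou_thresh)
```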
Ablation Study We conduct several ablation experiments to analyze SAST. The details are discussed as follows. The Effectiveness of TBO, TCO and TVO. To verify the effectiveness of the Text Instance Segmentation Module (TVO and TCO maps) and the Arbitrary Shape Representation Module (TBO map) in SAST, we conduct several experiments with the following configurations: 1) TCL + CC + Expand: a naive way to predict the center region of text, use connected component analysis to achieve text instance segmentation, and expand the contours of the connected components by a shrinking rate as the final text geometric representation. 2) TCL + CC + TBO: instead of expanding the contours directly, we reconstruct the precise polygon of a text instance with the Arbitrary Shape Representation Module. 3) TCL + TVO + TCO + TBO: as a substitute for connected component analysis, we use the point-to-quad assignment in the Text Instance Segmentation Module, which incorporates high-level object knowledge and low-level information and assigns each pixel on the TCL map to its best matching text instance. The effectiveness of the proposed method is demonstrated on SCUT-CTW1500, as shown in Tab. 1. It surpasses the first two configurations by 21.75% and 1.46% in Hmean, respectively. Meanwhile, the proposed point-to-quad assignment costs almost the same time as connected component analysis. The Trade-off between Speed and Accuracy. There is a trade-off between speed and accuracy: mainstream segmentation methods maintain a high output resolution, usually the same size as the input image, to achieve better results at a correspondingly high cost in time. Several experiments are conducted on the SCUT-CTW1500 benchmark. We compare the performance with different output resolutions, i.e., {1, 1/2, 1/4, 1/8} of the input size, and find a reasonable trade-off between speed and accuracy at the 1/4 scale of the input images. The detailed configurations and results are shown in Tab. 2. Note that the feature extractor in these experiments is not equipped with the Context Attention Block. The Effectiveness of Context Attention Blocks. We introduce the CABs into the network architecture to capture long-range dependencies of pixel information. We conduct two experiments on SCUT-CTW1500, replacing the CABs with several stacked convolutional layers as the baseline, which has almost the same number of trainable variables. The input image size is 512 × 512, and the output is at 1/4 of the input size. Evaluation on Curved Text Benchmark On SCUT-CTW1500 and Total-Text, we evaluate the performance of SAST for detecting text lines of arbitrary shapes. We fine-tune our model for about 10 epochs on the SCUT-CTW1500 and Total-Text training sets, respectively. In the testing phase, the number of vertices of the text polygons is counted adaptively, and we set the scale of the longer side to 512 for single-scale testing on both datasets. The quantitative results are shown in Tab. 4 and Tab. 5. With the help of the efficient post-processing, SAST achieves 80.97% and 78.08% in Hmean on SCUT-CTW1500 and Total-Text, respectively, which is comparable to the state-of-the-art methods. In addition, multi-scale testing further improves Hmean to 81.45% and 80.21% on SCUT-CTW1500 and Total-Text. Visualizations of curved text detection are shown in Fig. 6 (a) and (b). As can be seen, the proposed text detector SAST handles curved text lines well.
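For reference, the Hmean values quoted in these tables are the harmonic mean of precision and recall; a tiny helper makes the relationship explicit (the numbers in the comment are illustrative, not taken from the tables).

```python
def hmean(precision, recall):
    """Harmonic mean (F-measure) of precision and recall, as reported in the tables."""
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# Illustrative numbers only: hmean(0.85, 0.77) ~= 0.808, i.e. on the same scale
# as the ~81% Hmean reported for SCUT-CTW1500 above.
```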
Evaluation on ICDAR 2015 To verify its validity for detecting oriented text, we compare SAST with the state-of-the-art methods on the ICDAR 2015 dataset, a standard oriented text dataset. Compared with previous arbitrarily-shaped text detectors [35,38,39], which detect text at the same resolution as the input image, SAST achieves better performance at a much faster speed. All the results are listed in Tab. 6. Specifically, for single-scale testing, SAST achieves 86.91% in Hmean, surpassing most competitors (pure detection methods without the assistance of a recognition task). Moreover, multi-scale testing increases Hmean by about 0.53%. Some detection results are shown in Fig. 6 (c), indicating that SAST is also capable of detecting multi-oriented text accurately. Evaluation on ICDAR2017-MLT To demonstrate the generalization ability of SAST on multilingual scene text detection, we evaluate SAST on ICDAR2017-MLT. Similar to the above training procedure, the detector is fine-tuned for about 10 epochs from the SynthText pre-trained model. For single-scale testing, our proposed method achieves a Hmean of 68.76%, and it increases to 72.37% for multi-scale testing. The quantitative results are shown in Tab. 7. The visualization of multilingual text detection is illustrated in Fig. 6 (d), which shows the robustness of the proposed method in detecting multilingual scene text. Runtime In this paper, we make a trade-off between speed and accuracy. The TCL, TVO, TCO, and TBO maps are predicted at 1/4 the size of the input images. With the proposed post-processing step, SAST is able to detect text of arbitrary shapes at real-time speed with a commonly used GPU; in particular, it runs at 27.63 FPS on SCUT-CTW1500 with a single NVIDIA Titan Xp graphics card. CONCLUSION AND FUTURE WORK In this paper, we propose an efficient single-shot arbitrarily-shaped text detector together with Context Attention Blocks and a mechanism of point-to-quad assignment, which integrates both high-level object knowledge and low-level pixel information to obtain text instances from a context-enhanced segmentation. Several experiments demonstrate that the proposed SAST is effective in detecting arbitrarily-shaped text and is also robust in generalizing to multilingual scene text datasets. Qualitative results show that SAST helps to alleviate some common challenges in segmentation-based text detectors, such as the problem of fragments and the separation of adjacent text instances. Moreover, with a commonly used GPU, SAST runs fast and may be sufficient for some real-time applications, e.g., augmented reality translation. However, it is difficult for SAST to detect some extreme cases, mainly very small text regions. In the future, we are interested in improving the detection of small text and developing an end-to-end text reading system for text of arbitrary shapes.
4,360
1908.04867
2968869838
Abstract We introduce a game-theoretic model to investigate the strategic interaction between a cyber insurance policyholder whose premium depends on her self-reported security level and an insurer with the power to audit the security level upon receiving an indemnity claim. Audits can reveal fraudulent (or simply careless) policyholders not following reported security procedures, in which case the insurer can refuse to indemnify the policyholder. However, the insurer has to bear an audit cost even when the policyholders have followed the prescribed security procedures. As audits can be expensive, a key problem insurers face is to devise an auditing strategy to deter policyholders from misrepresenting their security levels to gain a premium discount. This decision-making problem was motivated by conducting interviews with underwriters and reviewing regulatory filings in the US; we discovered that premiums are determined by security posture, yet this is often self-reported and insurers are concerned by whether security procedures are practised as reported by the policyholders. To address this problem, we model this interaction as a Bayesian game of incomplete information and devise optimal auditing strategies for the insurers considering the possibility that the policyholder may misrepresent her security level. To the best of our knowledge, this work is the first theoretical consideration of post-incident claims management in cyber security. Our model captures the trade-off between the incentive to exaggerate security posture during the application process and the possibility of punishment for non-compliance with reported security policies. Simulations demonstrate that common sense techniques are not as efficient at providing effective cyber insurance audit decisions as the ones computed using game theory.
This paper continues the trend towards rectifying the "substantial discrepancy" @cite_19 between early cyber insurance models and informal claims about the insurance market. Early research considered factors relevant to the viability of a market. Interdependent security occurs when "the risk depends on the actions of others" @cite_17 @cite_29 . Optimists argued that insurers could coordinate the resulting collective action problem @cite_1 @cite_23 , leading to a net social welfare gain and a viable market. Skeptics instead focused on the "high correlation in failure of information systems" @cite_18 @cite_25 @cite_5 , citing it as a major impediment to the supply of cyber insurance. Recent empirical work @cite_28 analyzing 180 cyber insurance filings shows that the cyber insurance market is viable.
{ "abstract": [ "High correlation in failure of information systems due to worms and viruses has been cited as major impediment to cyber-insurance. However, of the many cyber-risk classes that influence failure of information systems, not all exhibit similar correlation properties. In this paper, we introduce a new classification of correlation properties of cyber-risks based on a twin-tier approach. At the first tier, is the correlation of cyber-risks within a firm i.e. correlated failure of multiple systems on its internal network. At second tier, is the correlation in risk at a global level i.e. correlation across independent firms in an insurer’s portfolio. Various classes of cyber-risks exhibit dierent level of correlation at two tiers, for instance, insider attacks exhibit high internal but low global correlation. While internal risk correlation within a firm influences its decision to seek insurance, the global correlation influences insurers’ decision in setting the premium. Citing real data we study the combined dynamics of the two-step risk arrival process to determine conditions conducive to the existence of cyber-insurance market. We address technical, managerial and policy choices influencing the correlation at both steps and the business implications thereof.", "", "Risks faced by information system operators and users are not only determined by their own security posture, but are also heavily affected by the security-related decisions of others. This interdependence between information system operators and users is a fundamental property that shapes the efficiency of security defense solutions. Game theory is the most appropriate method to model the strategic interactions between these participants. In this survey, we summarize game-theoretic interdependence models, characterize the emerging security inefficiencies, and present mechanisms to improve the security decisions of the participants. We focus our attention on games with interdependent defenders and do not discuss two-player attacker-defender games. Our goal is to distill the main insights from the state of the art and to identify the areas that need more attention from the research community.", "", "We propose a comprehensive formal framework to classify all market models of cyber-insurance we are aware of. The framework features a common terminology and deals with the specific properties of cyber-risk in a unified way: interdependent security, correlated risk, and information asymmetries. A survey of existing models, tabulated according to our framework, reveals a discrepancy between informal arguments in favor of cyber-insurance as a tool to align incentives for better network security, and analytical results questioning the viability of a market for cyber-insurance. Using our framework, we show which parameters should be considered and endogenized in future models to close this gap.", "Managing security risks in the Internet has so far mostly involved methods to reduce the risks and the severity of the damages. Those methods (such as firewalls, intrusion detection and prevention, etc) reduce but do not eliminate risk, and the question remains on how to handle the residual risk. In this paper, we take a new approach to the problem of Internet security and advocate managing this residual risk by buying insurance against it. 
Using insurance in the Internet raises several questions because entities in the Internet face correlated risks, which means that insurance claims will likely be correlated, making those entities less attractive to insurance companies. Furthermore, risks are interdependent, meaning that the decision by an entity to invest in security and self-protect affects the risk faced by others. We analyze the impact of these externalities on the security investments of users using a simple 2-agent model. Our key results are that there are sound economic reasons for agents to not invest much in self-protection, and that insurance is a desirable incentive mechanism which pushes agents over a threshold into a desirable state where they all invest in self-protection. In other words, insurance increases the level of self-protection, and therefore the level of security, in the Internet. Therefore, we believe that insurance should become an important component of risk management in the Internet.", "Social, technical and business connections can all give rise to security risks. These risks can be substantial when individual compromises occur in combinations, and difficult to predict when some connections are not easily observed. A significant and relevant challenge is to predict these risks using only locally-derivable information.", "Cyberinsurance to cover losses and liabilities from network or information security breaches can provide incentives for security investments that reduce risk. Although cyberinsurance has evolved, industry has been slow to adopt it as a risk management tool.", "" ], "cite_N": [ "@cite_18", "@cite_28", "@cite_29", "@cite_1", "@cite_19", "@cite_23", "@cite_5", "@cite_25", "@cite_17" ], "mid": [ "2144173251", "2593948499", "2017809722", "106782957", "2126949246", "2099263883", "53401952", "2156983464", "" ] }
Post-Incident Audits on Cyber Insurance Discounts
No amount of investment in security eliminates the risk of loss [1]. Driven by the frequency of cyber attacks, risk-averse organizations increasingly transfer residual risk by purchasing cyber insurance. As a result, the cyber-insurance market is predicted to grow to between $7.5 and $20 billion by 2020, as identified in [2]. Similar to other types of insurance, cyber-insurance providers pool the risk from multiple policyholders together and charge a premium to cover the underlying risk. Yet cyber risks like data breaches are qualitatively different from traditional lines like property insurance. For instance, buildings are built once according to building regulations, whereas computer systems continually change as mobile devices blur the network perimeter and software evolves with additional product features and security patches. Adversaries update strategies to exploit vulnerabilities emerging from technological flux. Further, the problems of moral hazard and adverse selection become more pressing. Adverse selection results from potential clients being more likely to seek insurance if they face a greater risk of loss. Meanwhile, information asymmetry limits insurers in assessing the applicant's risk. The risk depends on computer systems with many devices in different configurations, users with a range of goals, and idiosyncratic organizational security teams, policies, and employed controls. Collecting information is a costly procedure, let alone assessing and quantifying the corresponding risk. Moral hazard occurs when insureds engage in riskier behaviour in the knowledge that the insurer will indemnify any losses. Even if an initial assessment reveals that security policies are in place, it is no guarantee that they will be followed, given that "a significant number of security breaches result from employees' failure to comply with security policies" [3]. Technological compliance suffers too, as evidenced by the Equifax breach resulting from not patching a publicly known vulnerability [4]. Insurance companies collect risk information about applicants to address adverse selection. We interviewed 9 underwriters in the UK and found that 8 of them use self-reported application forms; 7 of them use telephone calls with the applicant; 3 of them use external audits; and only one uses on-site audits (note that these categories are mutually inclusive). This suggests that the application process relies on accurate self-reporting of risk factors. Cyber insurance application forms collect information ranging from generic business topics to questions related to information security controls [5]. Romanosky et al. [6] introduced a data set resulting from a US law requiring insurers to file documents describing their pricing schemes. Pricing algorithms depend on the applicant's industry, revenue, past-claims history, and, most relevant to this paper, the security controls employed by the organization. The insurer collects all this information and sets a price according to the formulas described in [6], reducing the premium when security controls are reported to be in place. This was corroborated by interviews with insurance professionals in Sweden [7].
Surprisingly, individual underwriters determine the size of the discount for security controls on a case-by-case basis, even though this can be as large as 25% of the premium. Moral hazard is generally addressed by including terms in the policy that insureds must follow for their coverage to be valid. An early study found that coverage was excluded for a "failure to take reasonable steps to maintain and upgrade security" [8]. A study from 2017 found few exclusions prescribing security procedures, but the majority of policies contained exclusions for "dishonest acts" [6]. One such dishonest act is violating the principle of utmost good faith, which requires insureds not to intentionally deceive the company offering the insurance [9]. This principle and the corresponding exclusion mitigate moral hazard, which might otherwise drive honest firms to de-prioritize compliance with security procedures. Further, it imposes a cost on fraudulent organizations claiming that entirely fictional security products are in place to receive a lower premium. For example, one insurer refused to pay out on a cyber policy because "security patches were no longer even available, much less implemented" [10], despite the application form reporting otherwise. We do not consider the legality of this case, but include it as evidence that insurers conduct audits to establish whether there are grounds for refusing coverage. Further, insurers offer discounts for insureds based on security posture and often rely on self-reports that security controls are in place. Interviewing insurers revealed concerns about whether security policies were being complied with in reality. Besides, larger premium discounts increase the incentive to misrepresent security levels, potentially necessitating a higher frequency of investigation, which is uneconomical for insurers. To explore how often insurers should audit cyber insurance claims, we develop a game-theoretic model that takes into account relevant parameters from pricing data collected by analyzing 26 cyber insurance pricing schemes filed in California, and we identify different optimal auditing strategies for insurers. Our analytical approach relies on Perfect Bayesian Equilibrium (PBE). We complement our analysis with simulation results using parameter values from the collected data. We further make "common sense" assumptions regarding auditing strategies and show that, in general, insurers are better off with the game-theoretic strategies. The results will be of interest to policymakers in the United States and the European Union, who believe cyber insurance can improve security levels by offering premium discounts [11]. The remainder of this paper is organized as follows. Section 2 identifies existing approaches to modeling the cyber insurance market. We introduce our game-theoretic model in Section 3 and present the analysis in Section 4. Section 5 details our methodology for data collection, which instantiates our simulation results. Finally, we end with concluding remarks in Section 6. Model We model the interaction between the policyholder P and insurer I as a one-shot dynamic game called the Cyber Insurance Audit Game (CIAG), which is represented in Figure 1. Each decision node of the tree represents a state where the player with the move has to choose an action. The leaf nodes present the payoffs of the players for the sequence of chosen actions.
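The model that follows uses a handful of symbols (W, p, d, c, l, a, ϕ, β, β*), which the paper collects in Table 1. As a quick reference for the numerical sketches later in this section, they can be gathered in one container; every numeric value below is an illustrative placeholder, not a figure from the paper or the filings data.

```python
from dataclasses import dataclass

@dataclass
class CIAGParams:
    """Symbols of the Cyber Insurance Audit Game; all values are illustrative only."""
    W: float = 100.0        # policyholder's initial wealth
    p: float = 10.0         # insurance premium
    d: float = 2.0          # premium discount for reported security investment
    c: float = 1.5          # cost of the additional security investment
    l: float = 20.0         # loss from a security incident
    a: float = 5.0          # insurer's audit cost
    phi: float = 0.6        # prior probability that the policyholder is of type P_S
    beta: float = 0.4       # breach probability without the investment
    beta_star: float = 0.2  # breach probability with the investment (beta_star < beta)
```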
The payoffs are represented in the format x over y, where x and y are the payoffs of P and I, respectively. Table 1 presents the list of symbols used in our model. Note that the initial wealth of the policyholder (W) and the premium for insurance coverage (p) are omitted from the tree for ease of presentation. We assume that the policyholder does not make a decision regarding her security investment in our model, because that decision has been made before seeking insurance. Hence, a particular applicant has a certain fixed type (with respect to security), but the insurer does not know the type of an applicant due to information asymmetry. We can model the insurer's uncertainty by assuming that it encounters certain types of applicants with certain probabilities. The type of the policyholder is modeled as the outcome of a random event, that is, nature (N) decides the policyholder's type with respect to additional security investments, i.e., P_S represents a policyholder with additional security investments and P_N one without. Further, nature also decides whether a security incident occurs for each policyholder, represented as B (breach) and NB (no breach). The probability of an incident depends on the type of the policyholder. Nature moves first by randomly choosing the policyholder's type according to a known a priori distribution: P_S with probability Pr(P_S) = ϕ and P_N with probability Pr(P_N) = 1 − ϕ, ϕ ∈ [0, 1]. The type is private to a policyholder and the insurer knows only the probability distribution over the different types. Hence, the game is of incomplete information. Regardless of the type, the policyholder's actions are CD (claim premium discount) and NC (no discount claim). Nature then decides whether a breach occurs for the policyholder, followed by the insurer's decision to audit (A) or not audit (NA), which arises only in the event of a breach. We assume that in CIAG, an audit investigates the misrepresentation of the cyber security investment and the claim for receiving a premium discount. In particular, it investigates whether the policyholder had indeed invested in cyber security countermeasures before claiming this discount. Our model does not assume that there is a particular type of audit. Having described the players and actions, in the following we present the interaction between P and I. First, P has signed up for a cyber insurance contract by paying a premium p. The type of P is decided by nature based on an additional security investment. We assume that this investment equals c. This investment will decrease the probability of P being compromised from β to β*. (Figure 1: The game tree of the Cyber Insurance Audit Game. Nature first chooses the policyholder's type, P_S with probability ϕ or P_N with probability 1 − ϕ; the policyholder chooses CD or NC; nature decides whether a breach (B) occurs; and after a breach the insurer chooses A or NA. Each leaf lists the payoffs of P and I, with W and p omitted.) At the same time, the investment will enable P to claim a premium discount d. We assume that I offers d without performing any audit, since investigating at this point would mean that I would have to audit policyholders who might never file an indemnity claim, thereby incurring avoidable losses. We further assume that if P decides to claim a discount without making the security investment, she will still receive d but risks having a future claim denied after an audit. After an incident, where P suffers loss l, insurer I has to decide whether to conduct an audit (e.g. forensics) to investigate details of the incident, including the security level of P at the time of the breach. We assume that this audit costs a to the insurer.
This audit will result in one of two cases. Case 1: confirming that P has indeed invested in security as claimed, in which case I will pay the indemnity. We assume full coverage, so the indemnity payment equals l. Case 2: discovering that P has misrepresented her security level, in which case I refuses to pay the indemnity and P has to bear the incident cost l. We assume that this case falls within the contract period, during which P is locked in by the contract. We define misrepresentation as the situation where P is fraudulent or simply careless in maintaining the prescribed security level in the insurance contract and reports a fabricated security level to get the premium discount. In Figure 1, some decision nodes of I are connected through dotted lines, indicating that I cannot distinguish between the connected nodes due to the unknown type of P. These sets of decision nodes define the information sets of the insurer. An information set is a set of one or more decision nodes of a player that determines the possible subsequent moves of the player, conditioned on what the player has observed so far in the game. The insurer has two information sets: one where the breach has occurred to a policyholder who has claimed the premium discount, CD = {(CD|P_S), (CD|P_N)}, and one where the breach has occurred to a policyholder who has not claimed the premium discount, NC = {(NC|P_S), (NC|P_N)}. Each of the insurer's information sets has two separate nodes, since the insurer does not know the real type of the policyholder when deciding whether to audit or not. In outcome CD,B,A, the expected utility of P_S is U^{P_S}_{CD,B,A} = U(W − p + d − c), where U is a utility function, which we assume to be monotonically increasing and concave, W is the policyholder's initial wealth, p is the premium paid to the insurer, d is the premium discount, and c is the cost of the security investment. We assume the utility function to be concave to model the risk aversion of policyholders, as defined in [12]. Note that we assume that W > p > d and W − p + d > c, and both W and p are exogenous to our model. The policyholder's utilities in the different outcomes are: U^{P_S}_{CD,B,A} = U^{P_S}_{CD,B,NA} = U^{P_S}_{CD,NB} = U(W − p + d − c) (1); U^{P_S}_{NC,B,A} = U^{P_S}_{NC,B,NA} = U^{P_S}_{NC,NB} = U(W − p − c) (2); U^{P_N}_{CD,B,A} = U(W − p + d − l) (3); U^{P_N}_{CD,B,NA} = U^{P_N}_{CD,NB} = U(W − p + d) (4); U^{P_N}_{NC,B,A} = U^{P_N}_{NC,B,NA} = U^{P_N}_{NC,NB} = U(W − p) (5). We further assume that the policyholder's goal is to maximize her expected utility. The expected utility of the policyholder is influenced by the possibility of a breach and the insurer's probability of auditing. In particular, the expected utility of P_S is the same regardless of the insurer's audit probability and the breach probability, due to indemnification. P_N, however, needs to consider these probabilities. In the outcome P_S,CD,B,A, the insurer's utility is U^I_{P_S,CD,B,A} = p − l − d − a, where p is the premium, d is the premium discount offered, l is the loss claimed by the policyholder, and a is the audit cost. In the other outcomes, the insurer's utility is as follows: U^I_{P_S,CD,B,A} = p − l − d − a (6); U^I_{P_N,CD,B,A} = p − d − a (11).
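The outcome utilities above can be written compactly as two functions of the outcome. The sketch below is our reading of Eqs. (1)-(11), with a logarithmic utility chosen only as an example of an increasing concave U and with argument names of our own.

```python
import math

def u(x):
    # An increasing, concave utility to model risk aversion (illustrative; assumes x > -1).
    return math.log(1.0 + x)

def policyholder_utility(W, p, d, c, l, typ, claim, breach, audit):
    """Policyholder payoffs following Eqs. (1)-(5)."""
    wealth = W - p
    if claim == "CD":
        wealth += d
    if typ == "PS":
        wealth -= c                       # invested; always indemnified if breached
    elif breach and claim == "CD" and audit:
        wealth -= l                       # misrepresentation detected: claim denied
    return u(wealth)

def insurer_utility(p, d, l, a, typ, claim, breach, audit):
    """Insurer payoffs in the spirit of Eqs. (6)-(11)."""
    util = p - (d if claim == "CD" else 0.0)
    if breach:
        if audit:
            util -= a
            if not (claim == "CD" and typ == "PN"):
                util -= l                 # indemnify unless misrepresentation is found
        else:
            util -= l
    return util

# e.g. insurer_utility(p=10, d=2, l=20, a=5, typ="PS", claim="CD", breach=True, audit=True)
# returns 10 - 2 - 5 - 20 = -17, matching Eq. (6) for these illustrative numbers.
```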
A PBE, in the context of our game, can be defined by the three requirements discussed in [30]. Requirement 1: At the time of play, the player must have a belief about which node of the information set has been reached in the game. The beliefs must be calculated using Bayes' rule whenever possible, ensuring that they are consistent throughout the analysis. Requirement 2: Given these beliefs, a player's strategy must be sequentially rational. A strategy profile is sequentially rational if and only if the action taken by the player with the move is optimal against the strategies played by all other opponents, given the player's belief at that information set. Requirement 3: The player must update her beliefs at the PBE to remove any implausible equilibria. These beliefs are determined by Bayes' rule and the players' equilibrium strategies. In the event of a security breach, the insurer's decision to audit or not must be based on beliefs regarding the policyholder's type. More specifically, a belief is a probability distribution over the nodes within the insurer's information set, conditioned on that information set having been reached. The insurer has two information sets, depending on whether the policyholder has claimed the premium discount or not: CD = {(CD|P_S), (CD|P_N)} and NC = {(NC|P_S), (NC|P_N)}. The insurer assigns a belief to each of these information sets. Let µ and λ be the insurer's beliefs, where

µ = Pr(P_S|CD),  λ = Pr(P_S|NC).

That is, for the first information set, the insurer believes with probabilities µ and 1 − µ that the premium discount claim comes from P_S and P_N, respectively. Similarly, for the second information set, the insurer believes with probability λ that it is P_S who has not claimed the premium discount and with probability 1 − λ that it is P_N. The first requirement of PBE dictates that Bayes' rule should be used to determine these beliefs. Thus

µ = Pr(CD|P_S) Pr(P_S) / [Pr(CD|P_S) Pr(P_S) + Pr(CD|P_N) Pr(P_N)]   (13)
λ = Pr(NC|P_S) Pr(P_S) / [Pr(NC|P_S) Pr(P_S) + Pr(NC|P_N) Pr(P_N)]   (14)

From the payoffs in Figure 1, it can be clearly seen that CD is always the preferred choice for P_S, whereas the insurer always gets a better payoff by choosing NA against NC, irrespective of the policyholder's type. Having defined the necessary concepts, we next identify the possible PBEs of the game under the following constraints:

l > a and l > d   (15)
l > a and l < d   (16)
l < a and l > d   (17)
l < a and l < d   (18)

where the PBEs are strategy profiles and beliefs that satisfy all three requirements described above.

Theorem 1. For ϕ > (l − a)/l, l > a and l > d, CIAG has only one pure-strategy PBE, ((CD,CD),(NA,NA)), in which the policyholder claims the premium discount regardless of her type while the insurer does not audit regardless of whether the policyholder claims a discount or not, with µ = ϕ and arbitrary λ ∈ [0, 1].

Proof. The existence of the pure-strategy PBE can be verified by examining the strategy profile (CD,CD), (NA,NA) under the constraint in Equation (15). This represents the case where an incident has occurred to a policyholder who has claimed the premium discount. a) Belief consistency: Due to information asymmetry, and as only one of the insurer's information sets is on the equilibrium path, she assigns Pr(CD|P_S) = 1 and Pr(CD|P_N) = 1. Thus, using Bayes' rule in Equation (13) gives µ = ϕ/(ϕ + 1 − ϕ) = ϕ. On the other hand, applying Bayes' rule in Equation (14) to λ yields 0/0, which is indeterminate. This implies that if the equilibrium is actually played, the off-equilibrium information set NC is never reached, which prevents the insurer from updating her belief there with Bayes' rule.
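The belief computations in Equations (13) and (14), including the 0/0 case just discussed, can be sketched as follows (the function name is illustrative):

    def posterior(phi, pr_obs_given_PS, pr_obs_given_PN, fallback=None):
        """Pr(P_S | observation) via Bayes' rule; returns `fallback` on 0/0."""
        num = phi * pr_obs_given_PS
        den = phi * pr_obs_given_PS + (1 - phi) * pr_obs_given_PN
        return num / den if den > 0 else fallback

    # Candidate equilibrium of Theorem 1: both types play CD.
    phi = 0.6
    mu  = posterior(phi, 1.0, 1.0)   # = phi
    lam = posterior(phi, 0.0, 0.0)   # 0/0 -> indeterminate, belief stays arbitrary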
Given this indeterminacy, the insurer specifies an arbitrary belief λ ∈ [0, 1]. b) Insurer's sequentially rational condition given beliefs: Against CD, the insurer's expected utilities of auditing and of not auditing are

U_A = ϕ · U^I_{P_S,CD,B,A} + (1 − ϕ) · U^I_{P_N,CD,B,A} = ϕ(p − d − l − a) + (1 − ϕ)(p − d − a) = p − ϕl − d − a   (19)
U_NA = ϕ · U^I_{P_S,CD,B,NA} + (1 − ϕ) · U^I_{P_N,CD,B,NA} = ϕ(p − d − l) + (1 − ϕ)(p − d − l) = p − d − l   (20)

The condition for A to be sequentially rational is U_A ≥ U_NA, which gives

p − ϕl − d − a ≥ p − d − l,  i.e.,  ϕ ≤ (l − a)/l = ϕ*.   (21)

Now considering the off-equilibrium information set NC, the insurer always gets a better payoff by choosing NA. Thus, NA is a dominant strategy of the insurer at the off-equilibrium information set NC, and the insurer's belief λ remains arbitrary. c) Policyholder's sequentially rational condition given the insurer's best response: Knowing the best responses of the insurer, i.e., (A,NA) for ϕ ≤ ϕ* and (NA,NA) for ϕ > ϕ* against CD, we derive the best response of the policyholder. Against the insurer's strategy profile (NA,NA), P_S gets a payoff U(W − p + d − c) by choosing CD; if she deviates to NC, she gets a payoff U(W − p − c), which is undesirable. P_N receives a payoff U(W − p + d) by choosing CD; if she deviates to NC, she gets a payoff U(W − p), which is also undesirable. Thus, (CD,CD), (NA,NA) can be verified as a PBE given ϕ > ϕ* and µ = ϕ. Note that the PBE includes the updated beliefs of the insurer, implicitly satisfying Requirement 3. From this PBE, we can see that if l > a, l > d and the insurer's belief ϕ is greater than the threshold value ϕ*, not auditing a breach is optimal for the insurer, and claiming the premium discount is optimal for the policyholder regardless of her type. When the insurer's belief satisfies ϕ ≤ ϕ*, no pure-strategy PBE exists; as a result, both players mix their strategies. We discuss this mixed-strategy PBE below. Note that, in the following, we use the inner tuple (x, 1 − x) to indicate a mixed strategy in which the player chooses the first action with probability x and the second action with probability 1 − x.

Theorem 2. For ϕ ≤ (l − a)/l, l > a, l > d, CIAG has only one mixed-strategy PBE, in which:
• P_S always prefers CD, while P_N randomizes between CD and NC with probabilities δ and 1 − δ, respectively;
• the insurer randomizes between A and NA with probabilities θ and 1 − θ, respectively, against CD, and always prefers NA against NC, with her beliefs about P_S playing CD and NC being µ = ϕ/(ϕ + (1 − ϕ)δ) and λ = 0, respectively, where

δ = a / ((1 − ϕ)l),  θ = [U(W − p + d) − U(W − p)] / (β · [U(W − p + d) − U(W − p + d − l)])   (22)

Proof. The existence of the mixed-strategy PBE is outlined below. a) Belief consistency: Again we apply Bayes' rule. Assuming that the policyholder sticks to the equilibrium strategy, the insurer can derive that Pr(CD|P_S) = 1, Pr(NC|P_S) = 0, Pr(P_S) = ϕ, Pr(P_N) = 1 − ϕ, Pr(CD|P_N) = δ and Pr(NC|P_N) = 1 − δ. Using Equations (13) and (14), we obtain µ = ϕ/(ϕ + (1 − ϕ)δ) and λ = 0, as stated in the theorem. b) Optimal responses given beliefs and the opponent's strategy: Given these beliefs and the mixed strategy of the policyholder, the insurer's optimal strategy maximizes her payoff. The insurer achieves this by randomizing her actions such that the policyholder's expected payoffs are equal across all of her actions. This is known as the indifference principle in game theory.
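For the region l > a and l > d, the closed forms appearing in Theorems 1 and 2 can be collected in a few lines; the function names below are illustrative.

    def phi_star(l, a):
        """Belief threshold phi* = (l - a) / l from Equation (21)."""
        return (l - a) / l

    def delta(phi, l, a):
        """P_N's probability of claiming the discount in the mixed PBE: a / ((1 - phi) l)."""
        return a / ((1 - phi) * l)

    def theta(W, p, d, l, beta, U):
        """Insurer's audit probability against CD in the mixed PBE (Equation (22))."""
        return (U(W - p + d) - U(W - p)) / (beta * (U(W - p + d) - U(W - p + d - l)))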
Following the indifference principle, the expected utility of P_N for choosing CD is

U^{P_N}_{CD} = β · [θ · U^{P_N}_{CD,B,A} + (1 − θ) · U^{P_N}_{CD,B,NA}] + (1 − β) · U^{P_N}_{CD,NB}
 = β · θ · U(W − p + d − l) + β · (1 − θ) · U(W − p + d) + (1 − β) · U(W − p + d)
 = β · θ · [U(W − p + d − l) − U(W − p + d)] + U(W − p + d)   (23)

and for choosing NC, against which NA is the insurer's dominant strategy, it is

U^{P_N}_{NC} = β · U^{P_N}_{NC,B,NA} + (1 − β) · U^{P_N}_{NC,NB} = β · U(W − p) + (1 − β) · U(W − p) = U(W − p)   (24)

The indifference principle requires that U^{P_N}_{NC} = U^{P_N}_{CD}, which gives

U(W − p) = β · θ · [U(W − p + d − l) − U(W − p + d)] + U(W − p + d)
θ = [U(W − p + d) − U(W − p)] / (β · [U(W − p + d) − U(W − p + d − l)])

as in Equation (22). Similarly, the policyholder mixes her strategy with the aim of making the insurer indifferent between choosing A and NA. Thus,

U^I_A = ϕ · [Pr(CD|P_S) · U^I_{P_S,CD,B,A} + Pr(NC|P_S) · U^I_{P_S,NC,B,A}] + (1 − ϕ) · [Pr(CD|P_N) · U^I_{P_N,CD,B,A} + Pr(NC|P_N) · U^I_{P_N,NC,B,A}]
 = ϕ · [(1)(p − d − l − a) + (0)(p − l − a)] + (1 − ϕ) · [δ(p − d − a) + (1 − δ)(p − l − a)]
 = p − l − a − ϕd − δd + δl + ϕδd − ϕδl   (25)

U^I_NA = ϕ · [Pr(CD|P_S) · U^I_{P_S,CD,B,NA} + Pr(NC|P_S) · U^I_{P_S,NC,B,NA}] + (1 − ϕ) · [Pr(CD|P_N) · U^I_{P_N,CD,B,NA} + Pr(NC|P_N) · U^I_{P_N,NC,B,NA}]
 = ϕ · [(1)(p − d − l) + (0)(p − l)] + (1 − ϕ) · [δ(p − d − l) + (1 − δ)(p − l)]
 = p − l − ϕd − δd + ϕδd   (26)

Setting U^I_A = U^I_NA gives

p − l − a − ϕd − δd + δl + ϕδd − ϕδl = p − l − ϕd − δd + ϕδd,  i.e.,  δ = a / ((1 − ϕ)l),

as in Equation (22). We obtain all the possible PBEs of CIAG by exhaustively applying this methodology over all combinations of the players' strategy profiles under the four constraints described in Equations (15) to (18). Figure 2 presents the solution space of CIAG. It further shows how the equilibrium strategies of the players depend on the premium discount (d), the audit cost (a), and the loss (l).

Model Evaluation

Our analysis in Section 4 provides a framework for insurers to determine an optimal auditing strategy against policyholders who can misrepresent their security levels to obtain premium discounts. This section describes the methodology used to obtain values for the various parameters of our model and presents simulation results using these values to determine the best strategy for the insurer.

Methodology and Data Collection

A diverse set of data sources is needed to study the interaction between insurance pricing, the effectiveness of security controls, and the cost of auditing claims. To this end, we combine the following data sources: a US law requiring insurers to report pricing algorithms [6], an analysis of a data set of over 12,000 cyber events [2], a study of the cost and effectiveness of security controls [31], and a range of informal estimates regarding the cost of an information security audit. The model assumes that nature determines incidents according to a Bernoulli distribution with loss amount l and probability of loss β. The analysis of the data set of 12,000 cyber incidents reveals that data breach incidents occur with a median loss of $170K and a frequency of around 0.015 for information firms [2], which we use as l and β, respectively. We adopt the security control model used in [31]. Both fixed and operational costs are estimated using industry reports, which correspond to c in our model. The effectiveness of a control is represented as a percentage decrease in the size or frequency of losses. For example, operating a firewall ($2,960) is said to reduce losses by 80% [31], leading to a probability of breach after investment (β*) of 0.2β. We downloaded all of the cyber insurance filings in the state of California and discarded off-the-shelf policies that do not change the price based on revenue, industry, or security controls.
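For reference, the parameters described above can be collected as follows (variable names are illustrative; the figures are those reported in this section):

    # Parameter values reported above: median data-breach loss and frequency for
    # information firms [2], and a firewall as the example security control [31].
    l         = 170_000      # median loss of a data breach
    beta      = 0.015        # breach probability without the additional control
    beta_star = 0.2 * beta   # firewall reduces losses by 80%, so beta* = 0.2 * beta
    c         = 2_960        # cost of operating the firewall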
After discarding these, 26 different pricing algorithms and corresponding rate tables remained, the contents of which are described in [6]. Data breach coverage with a $1 million limit was selected because it is the default coverage and it comfortably covers the loss value l for a data breach at an SME. The premium p and discount d vary by insurer. We chose a filing that explicitly mentions discounts for firewalls. For an information firm with $40M of revenue, the premium p is equal to $3,630 and the filings provide a range of discounts of up to 25%. The exact value depends on an underwriter's subjective judgment; to account for this, we consider multiple discounts in this range. Estimating the insurer's cost of an audit (a) is difficult because audits could be conducted by loss adjusters within the firm or contracted out to IT specialists. With the latter in mind, we explored the cost of an information security audit. The cost depends on the depth of the assessment and the expertise of the assessor, but collating the quoted figures suggests a range from $5,000 up to $100,000.

Numerical Analysis

We simulate the interaction between the cyber insurance policyholder and the insurer based on our game-theoretic model with the parameter values described above. First, we compare the expected payoffs of the insurer for different strategic models:
1. the game-theoretic approach (GT), where the insurer chooses an appropriate strategy according to our analysis (see Figure 2) and can either audit or not audit;
2. always auditing (A,A), regardless of whether the policyholder has claimed a discount or not;
3. never auditing (NA,NA), regardless of whether the policyholder has claimed a discount or not;
4. auditing if the policyholder has claimed a discount and not auditing otherwise (A,NA);
5. not auditing if the policyholder has claimed a discount and auditing otherwise (NA,A);
6. auditing half the time, regardless of whether the policyholder has claimed a discount or not (0.5A,0.5A);
7. auditing half the time when the policyholder has claimed a discount and not auditing otherwise (0.5A,NA).
In the following simulation figures, the insurer's average payoffs under each strategic model are calculated against a policyholder who plays the PBE strategy obtained through our analysis. This policyholder is also the most challenging one for the insurer, as she claims a discount even when she has not invested. The term "x repetitions of the game" means that CIAG is played for x independent runs with a given set of parameter values. From Figures 3a and 3b we observe that the payoff of the insurer when choosing the GT model is always better than under the rest of the strategic models, irrespective of the premium discount. The reason is that the model (A,NA), where the insurer audits only policyholders who have claimed the discount, is susceptible to auditing clients who have actually implemented the additional security controls, in which case the audit cost is a pure loss. Thus, the larger the number of honest policyholders, the higher the insurer's loss. Additionally, the insurer's loss, as expected, increases with the increasing cost of auditing. With the (NA,NA) model, the insurer chooses to reimburse the loss without confirming the policyholder's actual security level. Here, the insurer indemnifies even in cases where the policyholder has misrepresented her security level, suffering heavy losses.
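The comparison described above can be sketched as a simple Monte Carlo simulation in which the policyholder plays the PBE strategy of Section 4 (for the l > a, l > d region) and the insurer follows one of the listed audit policies. The code below is an illustrative sketch, not the simulation code used for the figures.

    import random

    def insurer_payoff(ptype, claimed, breached, audited, p, d, l, a):
        """Insurer's realized payoff in one play of CIAG (cf. Equations (6)-(11))."""
        payoff = p - (d if claimed else 0)
        if breached:
            refused = audited and claimed and ptype == "PN"   # misrepresentation detected
            payoff -= (0 if refused else l) + (a if audited else 0)
        return payoff

    def simulate(audit_prob, phi, beta, beta_star, p, d, l, a, runs=1500, seed=0):
        """audit_prob maps 'CD'/'NC' to the probability of auditing after a breach."""
        rng = random.Random(seed)
        phi_thr = (l - a) / l
        total = 0.0
        for _ in range(runs):
            ptype = "PS" if rng.random() < phi else "PN"
            if ptype == "PS":
                claimed = True                                # P_S always claims (Theorems 1 and 2)
            elif phi > phi_thr:
                claimed = True                                # pure-strategy PBE region
            else:
                claimed = rng.random() < a / ((1 - phi) * l)  # mixed PBE: CD with probability delta
            breached = rng.random() < (beta_star if ptype == "PS" else beta)
            audited = breached and rng.random() < audit_prob["CD" if claimed else "NC"]
            total += insurer_payoff(ptype, claimed, breached, audited, p, d, l, a)
        return total / runs

    # Fixed policies from the list above, for example:
    policies = {"(A,NA)":    {"CD": 1.0, "NC": 0.0},
                "(NA,NA)":   {"CD": 0.0, "NC": 0.0},
                "(0.5A,NA)": {"CD": 0.5, "NC": 0.0}}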
Another non-strategic approach is to randomize over the choice of auditing or not auditing a policyholder who has claimed a premium discount. This strategy, represented by the model (0.5A,NA), also yields a lower average payoff than the GT model. The GT model presents an optimal mix of (A,NA) and (NA,NA), where the insurer's decision to audit is based on a prior belief regarding the policyholder's security investment. For the median loss of $170K, which is greater than both the audit cost and the premium discount, the game solution is derived from the upper-right section of the solution space in Figure 2. In particular, when the insurer's belief ϕ regarding the policyholder's security investment is greater than the threshold ϕ*, she prefers (NA,NA), i.e., PBE 2: (CD,CD),(NA,NA) for ϕ > ϕ*. When the belief is lower than ϕ*, she prefers a mixed approach (PBE 3), relying simultaneously on (A,NA) and (NA,NA) and choosing whichever is more profitable. The GT model thus enables the insurer to take into account a prior belief regarding the policyholder's security investment under information asymmetry and to maximize her payoff given this belief. The figures further show that, regardless of how many times the game has been played, the GT model performs better than the non-game-theoretic models. With a higher audit cost, i.e., $100K, in Figures 4a and 4b, we observe that the insurer's average payoff with the model (A,NA) decreases drastically, confirming its shortcomings as discussed above. In the case of 1500 independent repetitions with the highest values of audit cost and premium discount, the insurer gains, on average, a higher payoff when choosing GT as opposed to the (NA,NA) model; the difference in payoff is equivalent to 98% of the annual premium charged to the policyholder. Next, simulation results are obtained over 100 repetitions with a median loss of $170K against a range of audit costs, premium discounts, and losses. Note that the models (A,A), (NA,A), and (0.5A,0.5A) are omitted from the figures, as they perform worse than the others, and for ease of presentation. Figures 5a and 5b show that there is a point of convergence where the choice of strategy largely does not matter, but as the audit cost increases there is a clear motivation for playing the game-theoretic solution, as any other solution is worse. As the discount increases, a policyholder may be strongly tempted to claim the premium discount, given that the insurer grants it without auditing her before an incident occurs. This increases the likelihood of the policyholder misrepresenting her actual security level. Given this possibility, GT noticeably dominates the other strategic models, as seen in Figures 5c and 5d. Further, in the case of 100 independent repetitions with the highest values of premium discount and audit cost, deploying GT gives the insurer, on average, a higher payoff compared to the next best model, which is (NA,NA); the difference in payoff is equivalent to 60% of the annual premium charged to the policyholder. Remark 2: For a constant loss, as the premium discount and audit cost increase, GT outperforms all other strategic models. Figure 6 shows that the loss is essentially not a discriminating factor when the audit cost is low, but it becomes discriminating as the audit cost approaches the loss. GT and (NA,NA) perform equally well up to this point, but as the discount increases along with the audit cost, GT exceeds (NA,NA).
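Restricted to the l > a and l > d region of Figure 2, the GT policy described above reduces to a simple decision rule that can be plugged into the earlier simulation sketch; other regions of the solution space are handled analogously but are not reproduced here.

    def gt_audit_policy(phi, W, p, d, l, a, beta, U):
        """Audit probabilities of the game-theoretic (GT) model for l > a and l > d."""
        phi_thr = (l - a) / l
        if phi > phi_thr:
            return {"CD": 0.0, "NC": 0.0}     # PBE of Theorem 1: never audit
        theta = (U(W - p + d) - U(W - p)) / (beta * (U(W - p + d) - U(W - p + d - l)))
        return {"CD": theta, "NC": 0.0}       # mixed PBE of Theorem 2

Under the stated assumptions, comparing simulate(gt_audit_policy(...), ...) with the fixed policies should reproduce the qualitative ranking reported in this section.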
Under these conditions, for 100 independent repetitions, insurers gain on average a higher payoff with the GT model than with the (NA,NA) model, the next best; the difference in payoff is equivalent to 66% of the annual premium charged to the policyholder. Remark 3: As the premium discount, audit cost, and loss increase, GT consistently outperforms all other strategic models. In summary, we have demonstrated how an insurer may use our framework in practice to determine the best auditing strategy against a policyholder. We have illustrated how the insurer's payoff is maximized by strategically choosing whether to audit in the event of a breach. Such strategic behaviour also allows the insurer to maximize her payoff against policyholders who can misrepresent their security levels to obtain a premium discount.

Conclusion

Speaking to cyber insurance providers reveals concerns about the discrepancy between the security policies applicants report that they follow in the application process and the applicants' compliance with these policies once coverage is in place. To address this, we developed a game-theoretic framework investigating audits as a mechanism to disincentivize the misrepresentation of security levels by policyholders. Thus far, we know of only one instance [10] of cyber insurance coverage being denied due to non-compliance with the security practices defined in the insurance contract. Although there may have been denials settled in private, this suggests that most cyber insurance providers follow the never-audit strategy. Our analysis derived a game-theoretic strategy that outperforms naïve strategies such as never auditing. By considering the post-incident claims management process, we demonstrated how a cyber insurance market can avoid collapse (contradicting [26]) when the policyholder can fraudulently report her security level. To extend this paper, future work could consider modelling uncertainty about the effectiveness of the implemented security measures. In the current model, the policyholder's type is chosen by nature according to some probability distribution. The model could be extended so that the policyholder maximizes her expected payoff by selecting an investment strategy based on beliefs about her type. This would extend, for example, our analysis to consider the overall utility function of the policyholder, that is, considering both the investment and no-investment types simultaneously and maximizing the expected payoff. Another interesting direction is investigating how the potential loss l changes as a function of the security investment; in this case, we would be looking into different types of risk profiles of policyholders. We could also investigate the trade-off between the additional investment, the discount, and the residual risk. Finally, a future extension could make the investment in security a strategic choice for the policyholder in a multi-round game with a no-claims bonus, as our data set describes the size of these discounts. We could also allow belief updates to influence the insurer's choices in each iteration.
5,971
1908.04867
2968869838
Abstract We introduce a game-theoretic model to investigate the strategic interaction between a cyber insurance policyholder whose premium depends on her self-reported security level and an insurer with the power to audit the security level upon receiving an indemnity claim. Audits can reveal fraudulent (or simply careless) policyholders not following reported security procedures, in which case the insurer can refuse to indemnify the policyholder. However, the insurer has to bear an audit cost even when the policyholders have followed the prescribed security procedures. As audits can be expensive, a key problem insurers face is to devise an auditing strategy to deter policyholders from misrepresenting their security levels to gain a premium discount. This decision-making problem was motivated by conducting interviews with underwriters and reviewing regulatory filings in the US; we discovered that premiums are determined by security posture, yet this is often self-reported and insurers are concerned by whether security procedures are practised as reported by the policyholders. To address this problem, we model this interaction as a Bayesian game of incomplete information and devise optimal auditing strategies for the insurers considering the possibility that the policyholder may misrepresent her security level. To the best of our knowledge, this work is the first theoretical consideration of post-incident claims management in cyber security. Our model captures the trade-off between the incentive to exaggerate security posture during the application process and the possibility of punishment for non-compliance with reported security policies. Simulations demonstrate that common sense techniques are not as efficient at providing effective cyber insurance audit decisions as the ones computed using game theory.
The timing of the insurer's intervention is an important strategic aspect. Ex-ante interventions by the insurer include risk assessments and security investments before the policy term begins. @cite_6 investigated an insurer who could assess security levels either perfectly or not at all, concluding that the latter cannot support a functioning market. @cite_7 showed that ex-ante assessments, in combination with discounts for adopting security controls, can lead to an increase in social welfare. A more recent model introduces stochastic uncertainty about the policyholder's security level @cite_16 .
{ "abstract": [ "An insurer has to know the risks faced by a potential client to accurately determine an insurance premium offer. However, while the potential client might have a good understanding of its own security practices, it may also have an incentive not to disclose them honestly since the resulting information asymmetry could work in its favor. This information asymmetry engenders adverse selection, which can result in unfair premiums and reduced adoption of cyber-insurance. To overcome information asymmetry, insurers often require potential clients to self-report their risks. Still, clients do not have any incentive to perform thorough self-audits or to provide comprehensive reports. As a result, insurers have to complement self-reporting with external security audits to verify the clients’ reports. Since these audits can be very expensive, a key problem faced by insurers is to devise an auditing strategy that deters clients from dishonest reporting using a minimal number of audits. To solve this problem, we model the interactions between a potential client and an insurer as a two-player signaling game. One player represents the client, who knows its actual security-investment level, but may report any level to the insurer. The other player represents the insurer, who knows only the random distribution from which the security level was drawn, but may discover the actual level using an expensive audit. We study the players’ equilibrium strategies and provide numerical illustrations.", "", "This paper investigates how competitive cyber-insurers affect network security and welfare of the networked society. In our model, a user's probability to incur damage (from being attacked) depends on both his security and the network security, with the latter taken by individual users as given. First, we consider cyberinsurers who cannot observe (and thus, affect) individual user security. This asymmetric information causes moral hazard. Then, for most parameters, no equilibrium exists: the insurance market is missing. Even if an equilibrium exists, the insurance contract covers only a minor fraction of the damage; network security worsens relative to the no-insurance equilibrium. Second, we consider insurers with perfect information about their users' security. Here, user security is perfectly enforceable (zero cost); each insurance contract stipulates the required user security. The unique equilibrium contract covers the entire user damage. Still, for most parameters, network security worsens relative to the no-insurance equilibrium. Although cyber-insurance improves user welfare, in general, competitive cyber-insurers fail to improve network security." ], "cite_N": [ "@cite_16", "@cite_7", "@cite_6" ], "mid": [ "2889445070", "", "1516366611" ] }
Post-Incident Audits on Cyber Insurance Discounts
No amount of investment in security eliminates the risk of loss [1]. Driven by the frequency of cyber attacks, risk-averse organizations increasingly transfer residual risk by purchasing cyber insurance. As a result, the cyber-insurance market is predicted to grow to between $7.5 and $20 billion by 2020, as identified in [2]. Similar to other types of insurance, cyber-insurance providers pool the risk from multiple policyholders together and charge a premium to cover the underlying risk. Yet cyber risks like data breaches are qualitatively different from traditional lines like property insurance. For instance, buildings are built once according to building regulations, whereas computer systems continually change as mobile devices blur the network perimeter and software evolves with additional product features and security patches. Adversaries update their strategies to exploit vulnerabilities emerging from this technological flux. Further, the problems of moral hazard and adverse selection become more pressing. Adverse selection results from potential clients being more likely to seek insurance if they face a greater risk of loss. Meanwhile, information asymmetry limits insurers in assessing the applicant's risk. The risk depends on computer systems with many devices in different configurations, users with a range of goals, and idiosyncratic organizational security teams, policies, and employed controls. Collecting information is a costly procedure, let alone assessing and quantifying the corresponding risk. Moral hazard occurs when insureds engage in riskier behaviour in the knowledge that the insurer will indemnify any losses. Even if an initial assessment reveals that security policies are in place, this is no guarantee that they will be followed, given that "a significant number of security breaches result from employees' failure to comply with security policies" [3]. Technological compliance suffers too, as evidenced by the Equifax breach resulting from not patching a publicly known vulnerability [4]. Insurance companies collect risk information about applicants to address adverse selection. We interviewed 9 underwriters in the UK and found that 8 of them use self-reported application forms; 7 of them use telephone calls with the applicant; 3 of them use external audits; and only one uses on-site audits (note that these methods are mutually inclusive). This suggests that the application process relies on accurate self-reporting of risk factors. Cyber insurance application forms collect information ranging from generic business topics to questions related to information security controls [5]. Romanosky et al. [6] introduced a data set resulting from a US law requiring insurers to file documents describing their pricing schemes. Pricing algorithms depended on the applicant's industry, revenue, past-claims history, and, most relevant to this paper, the security controls employed by the organization. The insurer collects all this information and sets a price according to the formulas described in [6], reducing the premium when security controls are reported to be in place. This was corroborated by interviews with insurance professionals in Sweden [7].
Surprisingly, individual underwriters determine the size of the discount for security controls on a case-by-case basis, even though it can be as large as 25% of the premium. Moral hazard is generally addressed by including terms in the policy that insureds must follow for their coverage to be valid. An early study found that coverage was excluded for a "failure to take reasonable steps to maintain and upgrade security" [8]. A study from 2017 found few exclusions prescribing security procedures, but the majority of policies contained exclusions for "dishonest acts" [6]. One such dishonest act is violating the principle of utmost good faith, which requires insureds not to intentionally deceive the company offering the insurance [9]. This principle and the corresponding exclusion mitigate moral hazard, which might otherwise drive honest firms to de-prioritize compliance with security procedures. Further, they impose a cost on fraudulent organizations claiming that entirely fictional security products are in place in order to receive a lower premium. For example, one insurer refused to pay out on a cyber policy because "security patches were no longer even available, much less implemented" [10], despite the application form reporting otherwise. We do not consider the legality of this case, but include it as evidence that insurers conduct audits to establish whether there are grounds for refusing coverage. Further, insurers offer discounts to insureds based on security posture and often rely on self-reports that security controls are in place. Interviewing insurers revealed concerns about whether security policies were being complied with in reality. Moreover, larger premium discounts increase the incentive to misrepresent security levels, potentially necessitating a higher frequency of investigation, which is uneconomical for insurers. To explore how often insurers should audit cyber insurance claims, we develop a game-theoretic model that takes into account relevant parameters from pricing data collected by analyzing 26 cyber insurance pricing schemes filed in California, and we identify different optimal auditing strategies for insurers. Our analytical approach relies on Perfect Bayesian Equilibrium (PBE). We complement our analysis with simulation results using parameter values from the collected data. We further make "common sense" assumptions regarding auditing strategies and show that, in general, insurers are better off with the game-theoretic strategies. The results will be of interest to policymakers in the United States and the European Union, who believe cyber insurance can improve security levels by offering premium discounts [11]. The remainder of this paper is organized as follows. Section 2 identifies existing approaches to modeling the cyber insurance market. We introduce our game-theoretic model in Section 3 and present the analysis in Section 4. Section 5 details our methodology for data collection, which instantiates our simulation results. Finally, we end with concluding remarks in Section 6.

Model

We model the interaction between the policyholder P and the insurer I as a one-shot dynamic game called the Cyber Insurance Audit Game (CIAG), which is represented in Figure 1. Each decision node of the tree represents a state where the player with the move has to choose an action. The leaf nodes present the payoffs of the players for the sequence of chosen actions.
5,971
1908.04867
2968869838
Abstract We introduce a game-theoretic model to investigate the strategic interaction between a cyber insurance policyholder whose premium depends on her self-reported security level and an insurer with the power to audit the security level upon receiving an indemnity claim. Audits can reveal fraudulent (or simply careless) policyholders not following reported security procedures, in which case the insurer can refuse to indemnify the policyholder. However, the insurer has to bear an audit cost even when the policyholders have followed the prescribed security procedures. As audits can be expensive, a key problem insurers face is to devise an auditing strategy to deter policyholders from misrepresenting their security levels to gain a premium discount. This decision-making problem was motivated by conducting interviews with underwriters and reviewing regulatory filings in the US; we discovered that premiums are determined by security posture, yet this is often self-reported and insurers are concerned by whether security procedures are practised as reported by the policyholders. To address this problem, we model this interaction as a Bayesian game of incomplete information and devise optimal auditing strategies for the insurers considering the possibility that the policyholder may misrepresent her security level. To the best of our knowledge, this work is the first theoretical consideration of post-incident claims management in cyber security. Our model captures the trade-off between the incentive to exaggerate security posture during the application process and the possibility of punishment for non-compliance with reported security policies. Simulations demonstrate that common sense techniques are not as efficient at providing effective cyber insurance audit decisions as the ones computed using game theory.
The literature on the economic theory of insurance fraud has developed two main approaches: costly state falsification and costly state verification @cite_2 . The costly state falsification approach assesses the client's behaviour towards a claim. We consider the costly state verification approach, which focuses on the insurer identifying fraudulent claims. The insurer can verify claims via auditing but has to bear a verification cost. Optimal claims handling usually involves random auditing @cite_30 .
{ "abstract": [ "Abstract This paper characterizes the equilibrium of an insurance market where opportunist policyholders may file fraudulent claims. We assume that insurance policies are traded in a competitive market where insurers cannot distinguish honest policyholders from opportunists. The insurer-policyholder relationship is modelled as an incomplete information game, in which the insurer decides to audit or not. The market equilibrium depends on whether insurers can credibly commit or not to their audit strategies. We show that a no commitment equilibrium results in a welfare loss for honest individuals, which may even be so large that the insurance market completely shuts down. We also show that transferring monitoring costs to a budget-balanced common agency would mitigate the commitment problem.", "We survey recent developments in the economic analysis of insurance fraud. The paper first sets out the two main approaches to insurance fraud that have been developped in the literature, namely the costly state verification and the costly state falsification. Under costly state verification, the insurer can verify claims at some cost. Claims' verification may be deterministic or random, and it can be conditioned on fraud signals perceived by insurers. Under costly state falsification, the policyholder expends resources for the building-up of his or her claim not to be detected. We also consider the effects of adverse selection, in a context where insurers cannot distinguish honest policyholders from potential defrauders, as well as the consequences of credibility constraints on anti-fraud policies. Finally, we focus attention on the risk of collusion between policyholders and insurance agents or service providers." ], "cite_N": [ "@cite_30", "@cite_2" ], "mid": [ "1963731024", "1587323911" ] }
Post-Incident Audits on Cyber Insurance Discounts
No amount of investment in security eliminates the risk of loss [1]. Driven by the frequency of cyber attacks, risk-averse organizations increasingly transfer residual risk by purchasing cyber insurance. As a result, the cyber-insurance market is predicted to grow to between $7.5 and $20 billion by 2020, as identified in [2]. Similar to other types of insurance, cyber-insurance providers pool the risk from multiple policyholders together and charge a premium to cover the underlying risk. Yet cyber risks like data breaches are qualitatively different from traditional lines like property insurance. For instance, buildings are built once according to building regulations, whereas computer systems continually change as mobile devices blur the network perimeter and software evolves with additional product features and security patches. Adversaries update strategies to exploit vulnerabilities emerging from technological flux. Further, the problems of moral hazard and adverse selection become more pressing. Adverse selection results from potential clients being more likely to seek insurance if they face a greater risk of loss. Meanwhile, information asymmetry limits insurers in assessing the applicant's risk. The risk depends on computer systems with many devices in different configurations, users with a range of goals, and idiosyncratic organizational security teams, policies, and employed controls. Collecting information is a costly procedure, let alone assessing and quantifying the corresponding risk. Moral hazard occurs when insureds engage in riskier behaviour in the knowledge that the insurer will indemnify any losses. Even if an initial assessment reveals that security policies are in place, there is no guarantee that they will be followed, given that "a significant number of security breaches result from employees' failure to comply with security policies" [3]. Technological compliance suffers too, as evidenced by the Equifax breach resulting from not patching a publicly known vulnerability [4]. Insurance companies collect risk information about applicants to address adverse selection. We interviewed 9 underwriters in the UK and found that 8 of them use self-reported application forms; 7 of them use telephone calls with the applicant; 3 of them use external audits; and only one uses on-site audits (note that these are mutually inclusive events). This suggests that the application process relies on accurate self-reporting of risk factors. Cyber insurance application forms collect information on questions ranging from generic business topics to information security controls [5]. Romanosky et al. [6] introduced a data set resulting from a US law requiring insurers to file documents describing their pricing schemes. Pricing algorithms depended on the applicant's industry, revenue, past-claims history, and, most relevant to this paper, the security controls employed by the organization. The insurer collects all this information and sets a price according to the formulas described in [6], reducing the premium when security controls are reported to be in place. This was corroborated by interviews with insurance professionals in Sweden [7]. 
Surprisingly, individual underwriters determine the size of the discount for security controls on a case-by-case basis, even though this can be as large as 25% of the premium. Moral hazard is generally addressed by including terms in the policy that insureds must follow for their coverage to be valid. An early study found that coverage was excluded for a "failure to take reasonable steps to maintain and upgrade security" [8]. A study from 2017 found few exclusions prescribing security procedures, but the majority of policies contained exclusions for "dishonest acts" [6]. One such dishonest act is violating the principle of utmost good faith, which requires insureds not to intentionally deceive the company offering the insurance [9]. This principle and the corresponding exclusion mitigate moral hazard, which might otherwise drive honest firms to de-prioritize compliance with security procedures. Further, it imposes a cost on fraudulent organizations claiming that entirely fictional security products are in place to receive a lower premium. For example, one insurer refused to pay out on a cyber policy because "security patches were no longer even available, much less implemented" [10] despite the application form reporting otherwise. We do not consider the legality of this case, but include it as evidence that insurers conduct audits to establish whether there are grounds for refusing coverage. Further, insurers offer discounts for insureds based on security posture and often rely on self-reports that security controls are in place. Interviewing insurers revealed concerns about whether security policies were being complied with in reality. Moreover, larger premium discounts increase the incentive to misrepresent security levels, potentially necessitating a higher frequency of investigation, which is uneconomical for insurers. To explore how often insurers should audit cyber insurance claims, we develop a game-theoretic model that takes into account relevant parameters from pricing data, collected by analyzing 26 cyber insurance pricing schemes filed in California, and identifies different optimal auditing strategies for insurers. Our analytical approach relies on Perfect Bayesian Equilibrium (PBE). We complement our analysis with simulation results using parameter values from the collected data. We further make "common sense" assumptions regarding auditing strategies and show that, in general, insurers are better off with the game-theoretic strategies. The results will be of interest to policymakers in the United States and the European Union, who believe cyber insurance can improve security levels by offering premium discounts [11]. The remainder of this paper is organized as follows. Section 2 identifies existing approaches to modeling the cyber insurance market. We introduce our game-theoretic model in Section 3 and present the analysis in Section 4. Section 5 details our methodology for data collection, which instantiates our simulation results. Finally, we end with concluding remarks in Section 6. Model We model the interaction between the policyholder P and insurer I as a one-shot dynamic game called the Cyber Insurance Audit Game (CIAG), which is represented in Figure 1. Each decision node of the tree represents a state where the player with the move has to choose an action. The leaf nodes present the payoffs of the players for the sequence of chosen actions. 
The payoffs are represented as a pair (x, y), where x and y are the payoffs of P and I, respectively. Table 1 presents the list of symbols used in our model. Note that the initial wealth of the policyholder (W) and the premium for insurance coverage (p) are omitted from the tree for ease of presentation. We assume that the policyholder does not make a decision regarding her security investment in our model, because that decision has been made before seeking insurance. Hence, a particular applicant has a certain fixed type (with respect to security), but the insurer does not know the type of an applicant due to information asymmetry. We can model the insurer's uncertainty by assuming that it encounters certain types of applicants with certain probabilities. The type of the policyholder is modeled as an outcome of a random event, that is, nature (N) decides the policyholder's type with respect to additional security investments, i.e., P_S represents one with additional security investments and P_N one without. Further, nature also decides whether a security incident occurs for each policyholder, represented as B (breach) and NB (no breach). The probability of an incident depends on the type of the policyholder. Nature moves first by randomly choosing the policyholder's type according to a known a priori distribution: P_S with probability Pr(P_S) = ϕ and P_N with probability Pr(P_N) = 1 − ϕ, ϕ ∈ [0, 1]. The type is private to a policyholder and the insurer knows only the probability distribution over the different types. Hence, the game is of incomplete information. Regardless of the types, the policyholder's actions are CD (claim premium discount) and NC (no discount claim). Nature then decides the occurrence of the breach on a policyholder, followed by the insurer's decision to audit (A) or not audit (NA) only in the event of a breach. We assume that in CIAG, an audit investigates the misrepresentation of the cyber security investment and the claim for receiving a premium discount. In particular, it investigates whether the policyholder had indeed invested in cyber security countermeasures before claiming this discount. Our model does not assume that there is a particular type of audit. Having described the players and actions, in the following we present the interaction between P and I. First, P has signed up for a cyber insurance contract by paying a premium p. The type of P is decided by nature based on an additional security investment. We assume that this investment equals c. This investment will decrease the probability of P being compromised from β to β*. [Figure 1: the CIAG game tree. Nature chooses the policyholder's type (P_S with probability ϕ, P_N with probability 1 − ϕ) and whether a breach occurs (B or NB); the policyholder chooses CD or NC; after a breach the insurer chooses A or NA; the leaves show the payoffs of P and I net of W and p, e.g., (d − c, −l − d − a) for the outcome (P_S, CD, B, A).] At the same time, the investment will enable P to claim a premium discount d. We assume that I offers d without performing any audit, since investigating at this point would mean that I would have to audit policyholders who might never file an indemnity claim, thereby incurring avoidable losses. We further assume that if P decides to claim a discount without making the security investment, she will still receive d but risks having a future claim denied after an audit. After an incident, where P suffers loss l, insurer I has to decide whether to conduct an audit (e.g., forensics) to investigate details of the incident, including the security level of P at the time of the breach. We assume that this audit costs a to the insurer. 
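Before turning to the audit outcomes, it may help to collect the symbols introduced so far in one place. The sketch below is our own illustration, not code from the paper; the field names simply mirror the notation in the text:

```python
from dataclasses import dataclass

@dataclass
class CIAGParams:
    """Parameters of the Cyber Insurance Audit Game as described above."""
    W: float      # policyholder's initial wealth
    p: float      # annual premium
    d: float      # premium discount for a reported security investment
    c: float      # cost of the additional security investment
    l: float      # loss from a breach (fully indemnified unless the claim is denied)
    a: float      # insurer's cost of auditing a claim
    beta: float   # breach probability without the investment
    eff: float    # fractional reduction in breach probability from the investment
    phi: float    # insurer's prior belief that the policyholder invested (type P_S)

    @property
    def beta_star(self) -> float:
        # Breach probability after the security investment (beta* in the text).
        return self.beta * (1.0 - self.eff)
```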
This audit will result in: Case 1: confirming that P has indeed invested in security as claimed, in which case I will pay the indemnity. We assume full coverage, so the indemnity payment equals l. Case 2: discovering that P has misrepresented her security level, in which case I refuses to pay the indemnity and P has to bear the incident cost l. We assume that this case falls within the contract period during which P is locked in by the contract. We define misrepresentation as when P is fraudulent or simply careless in maintaining the prescribed security level in the insurance contract and reports a fabricated security level to get the premium discount. In Figure 1, some decision nodes of I are connected through dotted lines, indicating that I cannot distinguish between the connected nodes due to the unknown type of P. These sets of decision nodes define the information sets of the insurer. An information set is a set of one or more decision nodes of a player that determines the possible subsequent moves of the player, conditional on what the player has observed so far in the game. The insurer has two information sets: one where the breach has occurred to a policyholder who has claimed the premium discount, CD = {(CD|P_S), (CD|P_N)}, and one where the breach has occurred to a policyholder who has not claimed the premium discount, NC = {(NC|P_S), (NC|P_N)}. Each of the insurer's information sets has two separate nodes, since the insurer does not know the real type of the policyholder when deciding whether to audit or not. In outcome CD,B,A, the utility of P_S is U^{P_S}_{CD,B,A} = U(W − p + d − c), where U is a utility function, which we assume to be monotonically increasing and concave, W is the policyholder's initial wealth, p is the premium paid to the insurer, d is the premium discount, and c is the cost of the security investment. We assume the utility function to be concave to model the risk aversion of policyholders as defined in [12]. Note that we assume that W > p > d and W − p + d > c, and both W and p are exogenous to our model. U^{P_S}_{CD,B,A} = U^{P_S}_{CD,B,NA} = U^{P_S}_{CD,NB} = U(W − p + d − c) (1); U^{P_S}_{NC,B,A} = U^{P_S}_{NC,B,NA} = U^{P_S}_{NC,NB} = U(W − p − c) (2); U^{P_N}_{CD,B,A} = U(W − p + d − l) (3); U^{P_N}_{CD,B,NA} = U^{P_N}_{CD,NB} = U(W − p + d) (4); U^{P_N}_{NC,B,A} = U^{P_N}_{NC,B,NA} = U^{P_N}_{NC,NB} = U(W − p) (5). We further assume that the policyholder's goal is to maximize her expected utility. The expected utility of the policyholder is influenced by the possibility of a breach and the insurer's probability of auditing. In particular, the expected utility for P_S will be the same regardless of the insurer's probability of auditing and the breach probability, due to indemnification. P_N, however, will need to consider these probabilities. In the outcome (P_S, CD, B, A), the insurer's utility is U^I_{P_S,CD,B,A} = p − l − d − a, where p is the premium, d is the premium discount offered, l is the loss claimed by the policyholder, and a is the audit cost. In the other outcomes, the insurer's utility follows analogously from the leaves of Figure 1: U^I_{P_S,CD,B,A} = p − l − d − a (6) … U^I_{P_N,CD,B,A} = p − d − a (11). Decision Analysis In this section, we analyze the equilibria of the proposed Cyber Insurance Audit Game (Figure 1), which is a dynamic Bayesian game with incomplete information. The analysis is conducted using the game-theoretic concept of Perfect Bayesian Equilibrium (PBE). This provides insights into the strategic behaviour of the policyholder P concerning discount claims and the insurer I's auditing decision. 
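The leaf utilities in Equations (1)-(5), together with the insurer's payoffs (cf. Equations (6) and (11)), translate directly into code. Below is a sketch under the stated assumptions, using a square-root utility as one example of a monotonically increasing, concave U (our choice, not the paper's); `params` is the hypothetical `CIAGParams` container sketched earlier:

```python
import math

def U(wealth: float) -> float:
    # One concave, monotonically increasing utility to model risk aversion.
    return math.sqrt(max(wealth, 0.0))

def policyholder_utility(params, invested: bool, claimed: bool,
                         breach: bool, audited: bool) -> float:
    """Utility of the policyholder for one leaf of the game tree (Eqs. 1-5)."""
    w = params.W - params.p
    if claimed:
        w += params.d
    if invested:
        w -= params.c
    # Only an uninvested policyholder who claimed the discount and is audited
    # after a breach has her indemnity refused and bears the loss herself.
    if breach and claimed and audited and not invested:
        w -= params.l
    return U(w)

def insurer_utility(params, invested: bool, claimed: bool,
                    breach: bool, audited: bool) -> float:
    """Insurer's monetary payoff for one leaf (cf. Eqs. 6 and 11)."""
    payoff = params.p - (params.d if claimed else 0.0)
    if audited:
        payoff -= params.a
    if breach:
        denied = audited and claimed and not invested
        if not denied:
            payoff -= params.l  # indemnity paid in full
    return payoff
```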
A PBE, in the context of our game, can be defined by the Bayes requirements discussed in [30]: Requirement 1: The player at the time of play must have a belief about which node of the information set has been reached in the game. The beliefs must be calculated using Bayes' rule, whenever possible, ensuring that they are consistent throughout the analysis. Requirement 2: Given these beliefs, a player's strategy must be sequentially rational. A strategy profile is said to be sequentially rational if and only if the action taken by the player with the move is optimal against the strategies played by all other opponents, given the player's belief at that information set. Requirement 3: The player must update her beliefs at the PBE to remove any implausible equilibria. These beliefs are determined by Bayes' rule and the players' equilibrium strategies. In the event of a security breach, the insurer's decision to audit or not must be based on beliefs regarding the policyholder's types. More specifically, a belief is defined as a probability distribution over the nodes within the insurer's information set, conditioned on that information set having been reached. The insurer has two information sets, depending on whether the policyholder has claimed the premium discount or not, namely CD = {(CD|P_S), (CD|P_N)} and NC = {(NC|P_S), (NC|P_N)}. The insurer assigns a belief to each of these information sets. Let µ and λ be the insurer's beliefs, where µ = Pr(P_S|CD) and λ = Pr(P_S|NC). That is, for the first information set, the insurer believes with probability µ and 1 − µ that the premium discount claim is from P_S and P_N, respectively. Similarly, for the second information set, the insurer believes with probability λ that P_S has not claimed the premium discount and with probability 1 − λ that P_N has not claimed the premium discount. The first requirement of PBE dictates that Bayes' rule should be used to determine beliefs. Thus, µ = Pr(CD|P_S) Pr(P_S) / [Pr(CD|P_S) Pr(P_S) + Pr(CD|P_N) Pr(P_N)] (13) and λ = Pr(NC|P_S) Pr(P_S) / [Pr(NC|P_S) Pr(P_S) + Pr(NC|P_N) Pr(P_N)] (14). From the payoffs in Figure 1, it can be clearly seen that CD is always a preferred choice for P_S. The insurer, on the other hand, always gets a better payoff by choosing NA against NC, irrespective of the policyholder's type. Having defined the necessary concepts, we next identify the possible PBEs of the game for the following constraints: l > a and l > d (15); l > a and l < d (16); l < a and l > d (17); l < a and l < d (18); where the PBEs are strategy profiles and beliefs that satisfy all three requirements described earlier. Theorem 1. For ϕ > (l − a)/l, l > a and l > d, CIAG has only one pure-strategy PBE, (CD,CD),(NA,NA), in which the policyholder claims the premium discount regardless of her type, while the insurer does not audit regardless of whether the policyholder claims a discount or not, with µ = ϕ and arbitrary λ ∈ [0, 1]. Proof. The existence of the pure-strategy PBE can be verified by examining the strategy profile (CD,CD) and (NA,NA) under the constraint in Equation (15). This represents the case where an incident has occurred to policyholders who have claimed the premium discount. a) Belief consistency: Due to information asymmetry, and as only one of the insurer's information sets is on the equilibrium path, she assigns Pr(CD|P_S) = 1 and Pr(CD|P_N) = 1. Thus, using Bayes' rule in Equation (13) gives µ = ϕ/(ϕ + 1 − ϕ) = ϕ. On the other hand, applying Bayes' rule in Equation (14) to λ yields 0/0, which is indeterminate. This implies that if the equilibrium is actually played, then the off-equilibrium information set NC should not be reached, preventing an update of the insurer's belief via Bayes' rule. 
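The 0/0 case just described is easy to see once the Bayes update is written out. A small sketch (ours, not the paper's code), returning None for the off-equilibrium-path case to mirror the "arbitrary belief" discussion:

```python
def posterior_belief(phi: float, pr_action_given_S: float,
                     pr_action_given_N: float):
    """Pr(P_S | action) via Bayes' rule; None if the action has zero
    probability on the equilibrium path (the indeterminate 0/0 case)."""
    num = phi * pr_action_given_S
    den = num + (1.0 - phi) * pr_action_given_N
    return None if den == 0.0 else num / den

# Theorem 1 scenario: both types claim the discount (Pr(CD|P_S) = Pr(CD|P_N) = 1),
# so mu = phi, while the NC information set is off-path and lambda stays arbitrary.
mu = posterior_belief(0.4, 1.0, 1.0)    # -> 0.4
lam = posterior_belief(0.4, 0.0, 0.0)   # -> None (indeterminate)
```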
Due to the indeterminate result, the insurer assigns an arbitrary λ ∈ [0, 1]. b) Insurer's sequentially rational condition given beliefs: Against CD, the insurer's expected payoff from auditing is U_A = ϕ · U^I_{P_S,CD,B,A} + (1 − ϕ) · U^I_{P_N,CD,B,A} = ϕ(p − d − l − a) + (1 − ϕ)(p − d − a) = p − ϕl − d − a (19), and from not auditing it is U_NA = ϕ · U^I_{P_S,CD,B,NA} + (1 − ϕ) · U^I_{P_N,CD,B,NA} = ϕ(p − d − l) + (1 − ϕ)(p − d − l) = p − d − l (20). The condition for A to be sequentially rational is U_A > U_NA, which gives p − ϕl − d − a > p − d − l, i.e., ϕ ≤ (l − a)/l = ϕ* (21). Now considering the off-equilibrium information set NC, the insurer always gets a better payoff by choosing NA. Thus, NA is a dominant strategy of the insurer against the off-equilibrium information set NC. The insurer's belief λ remains arbitrary. c) Policyholder's sequentially rational condition given the insurer's best response: Knowing the best responses of the insurer, i.e., (A,NA) for ϕ ≤ ϕ* and (NA,NA) for ϕ > ϕ* against CD, we derive the best response of the policyholder. For the insurer's strategy profile (NA,NA), P_S gets a payoff U(W − p + d − c) by choosing CD. If she deviates to NC, she will get a payoff U(W − p − c), which is undesirable. P_N, in turn, receives a payoff U(W − p + d) by choosing CD. If she deviates to NC, she will get a payoff U(W − p), which is also undesirable. Thus, (CD,CD) and (NA,NA) can be verified as a PBE given ϕ > ϕ* and µ = ϕ. Note that the PBE includes the updated beliefs of the insurer, implicitly satisfying Requirement 3. From the PBE, we can see that if l > a, l > d and the insurer's belief ϕ is greater than the threshold value ϕ*, not auditing a breach is optimal for the insurer and claiming the premium discount is optimal for the policyholder regardless of her type. When the insurer's belief is ϕ ≤ ϕ*, there exists no pure-strategy PBE. As a result, both players will mix their strategies. We discuss this mixed-strategy PBE below. Note that, in the following, we use the inner tuple (x, 1 − x) to indicate a mixed strategy where the player chooses the first action with probability x and the second action with probability 1 − x. Theorem 2. For ϕ ≤ (l − a)/l, l > a, l > d, CIAG has only one mixed-strategy PBE, in which: • P_S will always prefer CD, while P_N randomizes between CD and NC with probability δ and 1 − δ, respectively; • the insurer randomizes between A and NA with probability θ and 1 − θ, respectively, against CD, and she always prefers NA against NC, with her beliefs about P_S playing CD and NC being µ = ϕ/(ϕ + (1 − ϕ)δ) and λ = 0, respectively, where δ = a/((1 − ϕ)l) and θ = [U(W − p + d) − U(W − p)] / (β · [U(W − p + d) − U(W − p + d − l)]) (22). Proof. The existence of the mixed-strategy PBE is outlined below. a) Belief consistency: Again, we apply Bayes' rule. Assuming that the policyholder sticks to the equilibrium strategy, the insurer can derive that Pr(CD|P_S) = 1, Pr(NC|P_S) = 0, Pr(P_S) = ϕ, Pr(P_N) = 1 − ϕ, Pr(CD|P_N) = δ and Pr(NC|P_N) = 1 − δ. Using Equations (13) and (14), we obtain µ = ϕ/(ϕ + (1 − ϕ)δ) and λ = 0, as stated in the theorem. b) Optimal responses given beliefs and the opponent's strategy: Given these beliefs and the mixed strategy of the policyholder, the insurer's optimal strategy maximizes her payoff. The insurer can achieve this by randomizing her actions such that the policyholder's expected payoff is the same for all of the policyholder's actions. This is known as the indifference principle in game theory. 
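The quantities appearing in Theorems 1 and 2, and the indifference conditions they rest on, can be written down compactly and checked numerically before following the derivation below. This is our own sketch (reusing a concave utility U such as the square root assumed earlier), not the authors' code:

```python
def phi_star(l: float, a: float) -> float:
    # Belief threshold from Theorem 1: above it the insurer never audits.
    return (l - a) / l

def delta_mixed(phi: float, l: float, a: float) -> float:
    # Theorem 2: probability with which the uninvested type P_N claims the discount.
    return a / ((1.0 - phi) * l)

def theta_mixed(U, W: float, p: float, d: float, l: float, beta: float) -> float:
    # Theorem 2 / Equation (22): insurer's audit probability against CD.
    return (U(W - p + d) - U(W - p)) / (beta * (U(W - p + d) - U(W - p + d - l)))

def check_indifference(U, W, p, d, l, a, beta, phi, tol=1e-6) -> bool:
    """Verify that delta equalizes the insurer's payoffs for A and NA (Eqs. 25-26)
    and that theta equalizes P_N's expected utility for CD and NC (Eqs. 23-24)."""
    delta = delta_mixed(phi, l, a)
    theta = theta_mixed(U, W, p, d, l, beta)
    u_audit = phi * (p - d - l - a) + (1 - phi) * (
        delta * (p - d - a) + (1 - delta) * (p - l - a))
    u_no_audit = phi * (p - d - l) + (1 - phi) * (
        delta * (p - d - l) + (1 - delta) * (p - l))
    u_cd = beta * theta * U(W - p + d - l) + (1 - beta * theta) * U(W - p + d)
    u_nc = U(W - p)
    return abs(u_audit - u_no_audit) < tol and abs(u_cd - u_nc) < tol
```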
Thus, the expected utility of P_N for choosing CD is U^{P_N}_{CD} = β · [θ · U^{P_N}_{CD,B,A} + (1 − θ) · U^{P_N}_{CD,B,NA}] + (1 − β) · U^{P_N}_{CD,NB} = β · θ · U(W − p + d − l) + β · (1 − θ) · U(W − p + d) + (1 − β) · U(W − p + d) = β · θ · [U(W − p + d − l) − U(W − p + d)] + U(W − p + d) (23), and for choosing NC, where NA is the insurer's dominant strategy, it is U^{P_N}_{NC} = β · U^{P_N}_{NC,B,NA} + (1 − β) · U^{P_N}_{NC,NB} = β · U(W − p) + (1 − β) · U(W − p) = U(W − p) (24). The indifference principle requires that U^{P_N}_{NC} = U^{P_N}_{CD}, which gives U(W − p) = β · θ · [U(W − p + d − l) − U(W − p + d)] + U(W − p + d), i.e., θ = [U(W − p + d) − U(W − p)] / (β · [U(W − p + d) − U(W − p + d − l)]), as in Equation (22). Similarly, the policyholder will also mix her strategy with the aim of making the insurer indifferent between choosing A and NA. Thus, U^I_A = ϕ · [(1) · U^I_{P_S,CD,B,A} + (0) · U^I_{P_S,NC,B,A}] + (1 − ϕ) · [δ · U^I_{P_N,CD,B,A} + (1 − δ) · U^I_{P_N,NC,B,A}] = ϕ(p − d − l − a) + (1 − ϕ)[δ(p − d − a) + (1 − δ)(p − l − a)] = p − l − a − ϕd − δd + δl + ϕδd − ϕδl (25) and U^I_NA = ϕ · [(1) · U^I_{P_S,CD,B,NA} + (0) · U^I_{P_S,NC,B,NA}] + (1 − ϕ) · [δ · U^I_{P_N,CD,B,NA} + (1 − δ) · U^I_{P_N,NC,B,NA}] = ϕ(p − d − l) + (1 − ϕ)[δ(p − d − l) + (1 − δ)(p − l)] = p − l − ϕd − δd + ϕδd (26), and U^I_A = U^I_NA gives p − l − a − ϕd − δd + δl + ϕδd − ϕδl = p − l − ϕd − δd + ϕδd, i.e., δ = a/((1 − ϕ)l), as in Equation (22). We derive all the possible PBEs for CIAG by exhaustively applying this methodology over all combinations of the players' strategy profiles for the four constraints described in Equations (15) to (18). Figure 2 presents the solution space of CIAG. It further shows how the equilibrium strategies of the players depend on the premium discount (d), audit cost (a), and loss (l). Model Evaluation Our analysis in Section 4 provides a framework for insurers to determine an optimal auditing strategy against policyholders who can misrepresent their security levels to obtain premium discounts. This section illustrates the methodology used to obtain values for the various parameters of our model and presents simulation results using these values to determine the best strategy for the insurer. Methodology and Data Collection A diverse set of data sources is needed to study the interaction between insurance pricing, the effectiveness of security controls, and the cost of auditing claims. To this end, we combine the following data sources: a US law requiring insurers to report pricing algorithms [6], analysis of a data set of over 12,000 cyber events [2], a study of the cost and effectiveness of security controls [31], and a range of informal estimates regarding the cost of an information security audit. The model assumes that nature determines incidents according to a Bernoulli distribution with loss amount l and probability of loss β. Analysis of the data set of 12,000 cyber incidents reveals that data breach incidents occur with a median loss of $170K and a frequency of around 0.015 for information firms [2], which we use as l and β respectively. We adopt the security control model used in [31]. Both fixed and operational costs are estimated using industry reports, which correspond to c in our model. The effectiveness of a control is represented as a percentage decrease in the size or frequency of losses. For example, operating a firewall ($2,960) is said to reduce losses by 80% [31], leading to a probability of breach after investment (β*) of 0.2β. We downloaded all of the cyber insurance filings in the state of California and discarded off-the-shelf policies that do not change the price based on revenue, industry or security controls. 
This left 26 different pricing algorithms and corresponding rate tables, the contents of which are described in [6]. Data breach coverage with a $1 million limit was selected because it is the default coverage and it comfortably covers the loss value l for a data breach on SMEs. The premium p and discount d vary by insurer. We chose a filing explicitly mentioning discounts for firewalls. For an information firm with $40M of revenue, the premium p is equal to $3,630 and the filings provide a range of discounts up to 25%. The exact value depends on an underwriter's subjective judgment. To account for this, we consider multiple discounts in this range. Estimating the insurer's cost of audit (a) is difficult because audits could be conducted by loss adjusters within the firm or contracted out to IT specialists. With the latter in mind, we explored the cost of an information security audit. The cost depends on the depth of the assessment and the expertise of the assessor. However, collating the quoted figures suggests a range from $5,000 up to $100,000. Numerical Analysis We simulate the interaction between the cyber insurance policyholder and the insurer based on our game-theoretic model with the parameter values described above. First, we compare the expected payoffs of the insurer for different strategic models: 1. the game-theoretic approach (GT), where the insurer chooses an appropriate strategy according to our analysis (refer to Figure 2) and can either audit or not audit; 2. always auditing (A,A), regardless of whether the policyholder has claimed a discount or not; 3. never auditing (NA,NA), regardless of whether the policyholder has claimed a discount or not; 4. auditing if the policyholder has claimed a discount and not auditing if there is no discount claimed (A,NA); 5. not auditing if the policyholder has claimed a discount and auditing if there is no discount claimed (NA,A); 6. auditing half the time, regardless of whether the policyholder has claimed a discount or not (0.5A,0.5A); 7. auditing half the time when the policyholder has claimed a discount and not auditing if there is no discount claimed (0.5A,NA). In the following simulation figures, the insurer's average payoffs with each strategic model are calculated against a policyholder who plays the PBE strategy obtained through our analysis. This policyholder is also the most challenging one for the insurer, as she claims a discount even in the case of no investment. The term "x repetitions of the game" reflects that CIAG is played for x independent runs for a given set of parameter values. From Figures 3a and 3b we observe that the payoff of the insurer when choosing the GT model is always better than with the rest of the strategic models, irrespective of the premium discount. The reason for this is that the model (A,NA), where the insurer audits only policyholders who have claimed the discount, is susceptible to auditing clients who have implemented the additional security level, bearing the audit cost as a pure loss. Thus, the larger the number of honest policyholders, the higher the insurer's loss. Additionally, the insurer's loss, as expected, increases with the increasing cost of audit. With the (NA,NA) model, the insurer chooses to reimburse the loss without confirming the policyholder's actual security level. Here, the insurer indemnifies even cases where the policyholder has misrepresented her security level, suffering heavy losses. 
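The comparison described here can be reproduced in outline with a short Monte Carlo loop. The sketch below is our approximation, not the authors' simulation code: it uses the parameter values quoted in the text (l = $170k, β = 0.015, p = $3,630, 80% firewall effectiveness, audit cost and discount near the top of their ranges) and pits a few of the audit rules listed above against a policyholder who claims the discount as in the PBE; the prior belief ϕ = 0.5 is our placeholder:

```python
import random

def simulate_insurer_payoff(audit_rule, n_rounds=1500, phi=0.5, seed=0,
                            p=3630.0, d=0.25 * 3630.0, l=170_000.0,
                            a=100_000.0, beta=0.015, eff=0.80):
    """Average insurer payoff per contract-year when breached discount claims are
    audited according to audit_rule(claimed) and the policyholder plays the PBE
    claiming behaviour (P_S always claims; P_N claims with probability delta)."""
    rng = random.Random(seed)
    phi_threshold = (l - a) / l
    delta = min(1.0, a / ((1.0 - phi) * l)) if phi <= phi_threshold else 1.0
    total = 0.0
    for _ in range(n_rounds):
        invested = rng.random() < phi
        claimed = True if invested else rng.random() < delta
        breach = rng.random() < (beta * (1.0 - eff) if invested else beta)
        audited = breach and audit_rule(claimed)
        payoff = p - (d if claimed else 0.0)
        if audited:
            payoff -= a
        if breach and not (audited and claimed and not invested):
            payoff -= l  # indemnity paid unless the audit exposes misrepresentation
        total += payoff
    return total / n_rounds

# A few of the heuristic rules above; for these placeholder values phi > (l - a) / l,
# so the game-theoretic model reduces to never auditing, i.e. (NA,NA).
rules = {"(NA,NA)": lambda claimed: False,
         "(A,NA)": lambda claimed: claimed,
         "(A,A)": lambda claimed: True}
for name, rule in rules.items():
    print(name, round(simulate_insurer_payoff(rule), 2))
```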
Another non-strategic approach would be to randomize over the choice of auditing or not auditing a policyholder who has claimed a premium discount. This strategy, represented by the model (0.5A,NA), is likewise outperformed by the game-theoretic approach. The GT model presents an optimal mix of (A,NA) and (NA,NA), where the insurer's decision to audit is based on a prior belief regarding the policyholder's security investment. For the median loss of $170k, which is greater than both the audit cost and the premium discount, the game solution is derived from the upper-right section of the solution space in Figure 2. In particular, when the insurer's belief (ϕ) regarding the policyholder's security investment is greater than a threshold (ϕ*), she prefers (NA,NA), i.e., PBE 2: (CD,CD),(NA,NA) with ϕ > ϕ*. When the belief is lower than ϕ*, she prefers a mixed approach (PBE 3), simultaneously relying on (A,NA) and (NA,NA) and choosing whichever is more profitable. The GT model thus enables the insurer to take into account a prior belief regarding the policyholder's security investment under the condition of information asymmetry and to maximize her payoffs given this belief. The figures further show that, regardless of how many times the game has been played, model GT performs better than the non-game-theoretic models. With a higher audit cost, i.e., $100k in Figures 4a and 4b, we observe that the insurer's average payoff with the model (A,NA) decreases drastically, confirming its shortcomings as discussed above. In the case of 1500 independent repetitions for the highest values of audit cost and premium discount, the insurer gains, on average, a higher payoff when choosing GT as opposed to the (NA,NA) model. The increased difference in the payoff is equivalent to 98% of the annual premium charged to the policyholder. Next, the simulation results are obtained over 100 repetitions with a median loss of $170k against a range of audit cost, premium discount, and loss values. Note that the models (A,A), (NA,A), and (0.5A,0.5A) are omitted from the figures as they perform worse than the others, and for ease of presentation. Figures 5a and 5b show that there is a point of convergence where the strategy largely does not matter, but as the audit cost increases, there is motivation for playing the game-theoretic solution, as any other solution is worse. As the discount increases, a policyholder might be strongly motivated to claim the premium discount, given that the insurer will grant it without auditing her before an incident occurs. This increases the likelihood of the policyholder misrepresenting her actual security level. Given this possibility, GT noticeably dominates the other strategic models, as seen in Figures 5c and 5d. Further, in the case of 100 independent repetitions with the highest values of premium discount and audit cost, deploying GT gives the insurer on average a higher payoff compared to the next best model, which is (NA,NA). The increased difference in the payoff is equivalent to 60% of the annual premium charged to the policyholder. Remark 2: For a constant loss, as the premium discount and audit cost increase, GT outperforms all other strategic models. Figure 6 shows that the loss has essentially no discriminating effect when the audit cost is low, but it becomes discriminating as the audit cost approaches the loss. GT and (NA,NA) perform equally well up to this point, but as the discount increases with the audit cost, GT exceeds (NA,NA). 
In this case, for 100 independent repetitions, insurers gain on average a higher payoff with model GT, compared to model (NA,NA), the next best. The increased difference in payoff is equivalent to 66% of the annual premium charged to the policyholder. Remark 3: As the premium discount, audit cost, and loss increase, GT consistently outperforms all other strategic models. In summary, we have demonstrated how an insurer may use our framework in practice to determine the best auditing strategy against a policyholder. We have illustrated how the insurer's payoff is maximized by strategically choosing to audit or not in the event of a breach. Such strategic behaviour also allows the insurer to maximize her payoff against policyholders who can misrepresent their security levels to obtain a premium discount. Conclusion Speaking to cyber insurance providers reveals concerns about the discrepancy between the security policies applicants report following during the application process and the applicants' compliance with these policies once coverage is in place. To address this, we developed a game-theoretic framework investigating audits as a mechanism to disincentivize misrepresentation of security levels by policyholders. Thus far, we know of one instance [10] of denying cyber insurance coverage due to non-compliance with the security practices as defined in the insurance contract. Although there could have been denials settled in private, this suggests that most cyber insurance providers follow the never-audit strategy. Our analysis derived a game-theoretic strategy that outperforms naïve strategies, such as never auditing. By considering the post-incident claims management process, we demonstrated how a cyber insurance market can avoid collapse (contradicting [26]) when the policyholder can fraudulently report their security level. To extend this paper, future work could consider modelling uncertainty about the effectiveness of the implemented security measure. In the current model, the policyholder's type is chosen by Nature according to some probability distribution. It could be extended such that the policyholder maximizes expected payoff by selecting an investment strategy based on beliefs about her type. This consideration would extend, for example, our analysis to consider the overall utility function of the policyholder, that is, considering both the investment and no-investment types simultaneously, and maximizing the expected payoff. Another interesting direction is investigating how the potential loss l changes as a function of the security investment. In this case, we will be looking into different types of risk profiles of the policyholders. We could also investigate the trade-off between the additional investment, the discount, and the residual risk. Finally, a future extension could make investment in security a strategic choice for the policyholder in a multi-round game with a no-claims bonus, as our data set describes the size of these discounts. We could also allow belief updates to influence insurer choices on each iteration.
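As a compact summary of how an insurer could apply the framework in practice, the decision rule we read off the analysis for the regime discussed above (l > a and l > d) is sketched below; this is our interpretation, not the authors' implementation, and U is any concave utility as before:

```python
def gt_audit_probability(U, W, p, d, l, a, beta, phi) -> float:
    """Probability of auditing a breached discount claim under the GT model,
    assuming l > a and l > d (PBE 2 / PBE 3); claims without a discount are
    never audited in either regime."""
    if phi > (l - a) / l:
        return 0.0  # PBE 2: (CD,CD),(NA,NA), i.e. auditing cannot pay off on average
    # PBE 3: mix A and NA, auditing with probability theta from Equation (22).
    theta = (U(W - p + d) - U(W - p)) / (beta * (U(W - p + d) - U(W - p + d - l)))
    return min(1.0, max(0.0, theta))
```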
5,971
1908.04867
2968869838
Abstract We introduce a game-theoretic model to investigate the strategic interaction between a cyber insurance policyholder whose premium depends on her self-reported security level and an insurer with the power to audit the security level upon receiving an indemnity claim. Audits can reveal fraudulent (or simply careless) policyholders not following reported security procedures, in which case the insurer can refuse to indemnify the policyholder. However, the insurer has to bear an audit cost even when the policyholders have followed the prescribed security procedures. As audits can be expensive, a key problem insurers face is to devise an auditing strategy to deter policyholders from misrepresenting their security levels to gain a premium discount. This decision-making problem was motivated by conducting interviews with underwriters and reviewing regulatory filings in the US; we discovered that premiums are determined by security posture, yet this is often self-reported and insurers are concerned by whether security procedures are practised as reported by the policyholders. To address this problem, we model this interaction as a Bayesian game of incomplete information and devise optimal auditing strategies for the insurers considering the possibility that the policyholder may misrepresent her security level. To the best of our knowledge, this work is the first theoretical consideration of post-incident claims management in cyber security. Our model captures the trade-off between the incentive to exaggerate security posture during the application process and the possibility of punishment for non-compliance with reported security policies. Simulations demonstrate that common sense techniques are not as efficient at providing effective cyber insurance audit decisions as the ones computed using game theory.
Our contribution to the literature is the first theoretical consideration of post-incident claims management. Our model captures the trade-off between the incentive to exaggerate security posture to receive a premium discount and the possibility of punishment for non-compliance with the reported security policies. We consider misrepresenting security posture a strategic choice for the insured and allow the insurer to respond by auditing claims. Not allowing the insurer to do so leads to market collapse @cite_10 .
{ "abstract": [ "We consider arbitrary risk-averse users, whose costs of improving security are given by an arbitrary convex function. In our model, user probability to incur damage (from an attack) depends on both his own security and network security: thus, security is interdependent. We introduce two user types (normal and malicious), and allow one user type (malicious users) to subvert insurer monitoring, even if insurers perfectly enforce (at zero cost) security levels of normal users. We prove that with malicious users present, equilibrium contract that specifies user security fails to exist. We demonstrate, in a general setting, a failure of cyber-insurers to underwrite contracts conditioning the premiums on security. We consider arbitrary risk-averse users, whose costs of improving security are given by an arbitrary convex function. In our model, user probability to incur damage (from an attack) depends on both his own security and network security: thus, security is interdependent. We introduce two user types (normal and malicious), and allow one user type (malicious users) to subvert insurer monitoring, even if insurers perfectly enforce (at zero cost) security levels of normal users. We prove that with malicious users present, equilibrium contract that specifies user security fails to exist. We demonstrate, in a general setting, a failure of cyber-insurers to underwrite contracts conditioning the premiums on security." ], "cite_N": [ "@cite_10" ], "mid": [ "2031535643" ] }
Post-Incident Audits on Cyber Insurance Discounts
No amount of investment in security eliminates the risk of loss [1]. Driven by the frequency of cyber attacks, risk-averse organizations increasingly transfer residual risk by purchasing cyber insurance. As a result, the cyber-insurance market is predicted to grow to between $7.5 and $20 billion by 2020, as identified in [2]. Similar to other types of insurance, cyber-insurance providers pool the risk from multiple policyholders together and charge a premium to cover the underlying risk. Yet cyber risks like data breaches are qualitatively different from traditional lines like property insurance. For instance, buildings are built once according to building regulations, whereas computer systems continually change as mobile devices blur the network perimeter and software evolves with additional product features and security patches. Adversaries update strategies to exploit vulnerabilities emerging from technological flux. Further, the problems of moral hazard and adverse selection become more pressing. Adverse selection results from potential clients being more likely to seek insurance if they face a greater risk of loss. Meanwhile, information asymmetry limits insurers in assessing the applicant's risk. The risk depends on computer systems with many devices in different configurations, users with a range of goals, and idiosyncratic organizational security teams, policies, and employed controls. Collecting information is a costly procedure, let alone assessing and quantifying the corresponding risk. Moral hazard occurs when insureds engage in riskier behaviour in the knowledge that the insurer will indemnify any losses. Even if an initial assessment reveals that security policies are in place, there is no guarantee that they will be followed, given that "a significant number of security breaches result from employees' failure to comply with security policies" [3]. Technological compliance suffers too, as evidenced by the Equifax breach resulting from not patching a publicly known vulnerability [4]. Insurance companies collect risk information about applicants to address adverse selection. We interviewed 9 underwriters in the UK and found that 8 of them use self-reported application forms; 7 of them use telephone calls with the applicant; 3 of them use external audits; and only one uses on-site audits (note that these are mutually inclusive events). This suggests that the application process relies on accurate self-reporting of risk factors. Cyber insurance application forms collect information on questions ranging from generic business topics to information security controls [5]. Romanosky et al. [6] introduced a data set resulting from a US law requiring insurers to file documents describing their pricing schemes. Pricing algorithms depended on the applicant's industry, revenue, past-claims history, and, most relevant to this paper, the security controls employed by the organization. The insurer collects all this information and sets a price according to the formulas described in [6], reducing the premium when security controls are reported to be in place. This was corroborated by interviews with insurance professionals in Sweden [7]. 
Surprisingly, individual underwriters determine the size of the discount for security controls on a case-by-case basis, even though this can be as large as 25% of the premium. Moral hazard is generally addressed by including terms in the policy that insureds must follow for their coverage to be valid. An early study found that coverage was excluded for a "failure to take reasonable steps to maintain and upgrade security" [8]. A study from 2017 found few exclusions prescribing security procedures, but the majority of policies contained exclusions for "dishonest acts" [6]. One such dishonest act is violating the principle of utmost good faith, which requires insureds not to intentionally deceive the company offering the insurance [9]. This principle and the corresponding exclusion mitigate moral hazard, which might otherwise drive honest firms to de-prioritize compliance with security procedures. Further, it imposes a cost on fraudulent organizations claiming that entirely fictional security products are in place to receive a lower premium. For example, one insurer refused to pay out on a cyber policy because "security patches were no longer even available, much less implemented" [10] despite the application form reporting otherwise. We do not consider the legality of this case, but include it as evidence that insurers conduct audits to establish whether there are grounds for refusing coverage. Further, insurers offer discounts for insureds based on security posture and often rely on self-reports that security controls are in place. Interviewing insurers revealed concerns about whether security policies were being complied with in reality. Moreover, larger premium discounts increase the incentive to misrepresent security levels, potentially necessitating a higher frequency of investigation, which is uneconomical for insurers. To explore how often insurers should audit cyber insurance claims, we develop a game-theoretic model that takes into account relevant parameters from pricing data, collected by analyzing 26 cyber insurance pricing schemes filed in California, and identifies different optimal auditing strategies for insurers. Our analytical approach relies on Perfect Bayesian Equilibrium (PBE). We complement our analysis with simulation results using parameter values from the collected data. We further make "common sense" assumptions regarding auditing strategies and show that, in general, insurers are better off with the game-theoretic strategies. The results will be of interest to policymakers in the United States and the European Union, who believe cyber insurance can improve security levels by offering premium discounts [11]. The remainder of this paper is organized as follows. Section 2 identifies existing approaches to modeling the cyber insurance market. We introduce our game-theoretic model in Section 3 and present the analysis in Section 4. Section 5 details our methodology for data collection, which instantiates our simulation results. Finally, we end with concluding remarks in Section 6. Model We model the interaction between the policyholder P and insurer I as a one-shot dynamic game called the Cyber Insurance Audit Game (CIAG), which is represented in Figure 1. Each decision node of the tree represents a state where the player with the move has to choose an action. The leaf nodes present the payoffs of the players for the sequence of chosen actions. 
The payoffs are represented as a pair (x, y), where x and y are the payoffs of P and I, respectively. Table 1 presents the list of symbols used in our model. Note that the initial wealth of the policyholder (W) and the premium for insurance coverage (p) are omitted from the tree for ease of presentation. We assume that the policyholder does not make a decision regarding her security investment in our model, because that decision has been made before seeking insurance. Hence, a particular applicant has a certain fixed type (with respect to security), but the insurer does not know the type of an applicant due to information asymmetry. We can model the insurer's uncertainty by assuming that it encounters certain types of applicants with certain probabilities. The type of the policyholder is modeled as an outcome of a random event, that is, nature (N) decides the policyholder's type with respect to additional security investments, i.e., P_S represents one with additional security investments and P_N one without. Further, nature also decides whether a security incident occurs for each policyholder, represented as B (breach) and NB (no breach). The probability of an incident depends on the type of the policyholder. Nature moves first by randomly choosing the policyholder's type according to a known a priori distribution: P_S with probability Pr(P_S) = ϕ and P_N with probability Pr(P_N) = 1 − ϕ, ϕ ∈ [0, 1]. The type is private to a policyholder and the insurer knows only the probability distribution over the different types. Hence, the game is of incomplete information. Regardless of the types, the policyholder's actions are CD (claim premium discount) and NC (no discount claim). Nature then decides the occurrence of the breach on a policyholder, followed by the insurer's decision to audit (A) or not audit (NA) only in the event of a breach. We assume that in CIAG, an audit investigates the misrepresentation of the cyber security investment and the claim for receiving a premium discount. In particular, it investigates whether the policyholder had indeed invested in cyber security countermeasures before claiming this discount. Our model does not assume that there is a particular type of audit. Having described the players and actions, in the following we present the interaction between P and I. First, P has signed up for a cyber insurance contract by paying a premium p. The type of P is decided by nature based on an additional security investment. We assume that this investment equals c. This investment will decrease the probability of P being compromised from β to β*. [Figure 1: the CIAG game tree. Nature chooses the policyholder's type (P_S with probability ϕ, P_N with probability 1 − ϕ) and whether a breach occurs (B or NB); the policyholder chooses CD or NC; after a breach the insurer chooses A or NA; the leaves show the payoffs of P and I net of W and p, e.g., (d − c, −l − d − a) for the outcome (P_S, CD, B, A).] At the same time, the investment will enable P to claim a premium discount d. We assume that I offers d without performing any audit, since investigating at this point would mean that I would have to audit policyholders who might never file an indemnity claim, thereby incurring avoidable losses. We further assume that if P decides to claim a discount without making the security investment, she will still receive d but risks having a future claim denied after an audit. After an incident, where P suffers loss l, insurer I has to decide whether to conduct an audit (e.g., forensics) to investigate details of the incident, including the security level of P at the time of the breach. We assume that this audit costs a to the insurer. 
This audit will result in: Case 1: confirming that P has indeed invested in security as claimed, in which case I will pay the indemnity. We assume full coverage, so the indemnity payment equals l. Case 2: discovering that P has misrepresented her security level, in which case I refuses to pay the indemnity and P has to bear the incident cost l. We assume that this case falls within the contract period during which P is locked in by the contract. We define misrepresentation as when P is fraudulent or simply careless in maintaining the prescribed security level in the insurance contract and reports a fabricated security level to get the premium discount. In Figure 1, some decision nodes of I are connected through dotted lines, indicating that I cannot distinguish between the connected nodes due to the unknown type of P. These sets of decision nodes define the information sets of the insurer. An information set is a set of one or more decision nodes of a player that determines the possible subsequent moves of the player, conditional on what the player has observed so far in the game. The insurer has two information sets: one where the breach has occurred to a policyholder who has claimed the premium discount, CD = {(CD|P_S), (CD|P_N)}, and one where the breach has occurred to a policyholder who has not claimed the premium discount, NC = {(NC|P_S), (NC|P_N)}. Each of the insurer's information sets has two separate nodes, since the insurer does not know the real type of the policyholder when deciding whether to audit or not. In outcome CD,B,A, the utility of P_S is U^{P_S}_{CD,B,A} = U(W − p + d − c), where U is a utility function, which we assume to be monotonically increasing and concave, W is the policyholder's initial wealth, p is the premium paid to the insurer, d is the premium discount, and c is the cost of the security investment. We assume the utility function to be concave to model the risk aversion of policyholders as defined in [12]. Note that we assume that W > p > d and W − p + d > c, and both W and p are exogenous to our model. U^{P_S}_{CD,B,A} = U^{P_S}_{CD,B,NA} = U^{P_S}_{CD,NB} = U(W − p + d − c) (1); U^{P_S}_{NC,B,A} = U^{P_S}_{NC,B,NA} = U^{P_S}_{NC,NB} = U(W − p − c) (2); U^{P_N}_{CD,B,A} = U(W − p + d − l) (3); U^{P_N}_{CD,B,NA} = U^{P_N}_{CD,NB} = U(W − p + d) (4); U^{P_N}_{NC,B,A} = U^{P_N}_{NC,B,NA} = U^{P_N}_{NC,NB} = U(W − p) (5). We further assume that the policyholder's goal is to maximize her expected utility. The expected utility of the policyholder is influenced by the possibility of a breach and the insurer's probability of auditing. In particular, the expected utility for P_S will be the same regardless of the insurer's probability of auditing and the breach probability, due to indemnification. P_N, however, will need to consider these probabilities. In the outcome (P_S, CD, B, A), the insurer's utility is U^I_{P_S,CD,B,A} = p − l − d − a, where p is the premium, d is the premium discount offered, l is the loss claimed by the policyholder, and a is the audit cost. In the other outcomes, the insurer's utility follows analogously from the leaves of Figure 1: U^I_{P_S,CD,B,A} = p − l − d − a (6) … U^I_{P_N,CD,B,A} = p − d − a (11). Decision Analysis In this section, we analyze the equilibria of the proposed Cyber Insurance Audit Game (Figure 1), which is a dynamic Bayesian game with incomplete information. The analysis is conducted using the game-theoretic concept of Perfect Bayesian Equilibrium (PBE). This provides insights into the strategic behaviour of the policyholder P concerning discount claims and the insurer I's auditing decision. 
A PBE, in the context of our game, can be defined by the Bayes requirements discussed in [30]: Requirement 1: The player at the time of play must have a belief about which node of the information set has been reached in the game. The beliefs must be calculated using Bayes' rule, whenever possible, ensuring that they are consistent throughout the analysis. Requirement 2: Given these beliefs, a player's strategy must be sequentially rational. A strategy profile is said to be sequentially rational if and only if the action taken by the player with the move is optimal against the strategies played by all other opponents, given the player's belief at that information set. Requirement 3: The player must update her beliefs at the PBE to remove any implausible equilibria. These beliefs are determined by Bayes' rule and the players' equilibrium strategies. In the event of a security breach, the insurer's decision to audit or not must be based on beliefs regarding the policyholder's types. More specifically, a belief is defined as a probability distribution over the nodes within the insurer's information set, conditioned on that information set having been reached. The insurer has two information sets, depending on whether the policyholder has claimed the premium discount or not, namely CD = {(CD|P_S), (CD|P_N)} and NC = {(NC|P_S), (NC|P_N)}. The insurer assigns a belief to each of these information sets. Let µ and λ be the insurer's beliefs, where µ = Pr(P_S|CD) and λ = Pr(P_S|NC). That is, for the first information set, the insurer believes with probability µ and 1 − µ that the premium discount claim is from P_S and P_N, respectively. Similarly, for the second information set, the insurer believes with probability λ that P_S has not claimed the premium discount and with probability 1 − λ that P_N has not claimed the premium discount. The first requirement of PBE dictates that Bayes' rule should be used to determine beliefs. Thus, µ = Pr(CD|P_S) Pr(P_S) / [Pr(CD|P_S) Pr(P_S) + Pr(CD|P_N) Pr(P_N)] (13) and λ = Pr(NC|P_S) Pr(P_S) / [Pr(NC|P_S) Pr(P_S) + Pr(NC|P_N) Pr(P_N)] (14). From the payoffs in Figure 1, it can be clearly seen that CD is always a preferred choice for P_S. The insurer, on the other hand, always gets a better payoff by choosing NA against NC, irrespective of the policyholder's type. Having defined the necessary concepts, we next identify the possible PBEs of the game for the following constraints: l > a and l > d (15); l > a and l < d (16); l < a and l > d (17); l < a and l < d (18); where the PBEs are strategy profiles and beliefs that satisfy all three requirements described earlier. Theorem 1. For ϕ > (l − a)/l, l > a and l > d, CIAG has only one pure-strategy PBE, (CD,CD),(NA,NA), in which the policyholder claims the premium discount regardless of her type, while the insurer does not audit regardless of whether the policyholder claims a discount or not, with µ = ϕ and arbitrary λ ∈ [0, 1]. Proof. The existence of the pure-strategy PBE can be verified by examining the strategy profile (CD,CD) and (NA,NA) under the constraint in Equation (15). This represents the case where an incident has occurred to policyholders who have claimed the premium discount. a) Belief consistency: Due to information asymmetry, and as only one of the insurer's information sets is on the equilibrium path, she assigns Pr(CD|P_S) = 1 and Pr(CD|P_N) = 1. Thus, using Bayes' rule in Equation (13) gives µ = ϕ/(ϕ + 1 − ϕ) = ϕ. On the other hand, applying Bayes' rule in Equation (14) to λ yields 0/0, which is indeterminate. This implies that if the equilibrium is actually played, then the off-equilibrium information set NC should not be reached, preventing an update of the insurer's belief via Bayes' rule. 
Due to the indeterminate result, the insurer specifies an arbitrary λ ∈ [0, 1]. b) Insurer's sequentially rational condition given beliefs: Against the information set CD, the insurer's expected payoffs from choosing A and NA are

U_A = ϕ · U^I_{P_S,CD,B,A} + (1 − ϕ) · U^I_{P_N,CD,B,A} = ϕ(p − d − l − a) + (1 − ϕ)(p − d − a) = p − ϕl − d − a    (19)
U_NA = ϕ · U^I_{P_S,CD,B,NA} + (1 − ϕ) · U^I_{P_N,CD,B,NA} = ϕ(p − d − l) + (1 − ϕ)(p − d − l) = p − d − l    (20)

The condition for A to be sequentially rational is U_A > U_NA, which gives

p − ϕl − d − a > p − d − l,  i.e.,  ϕ ≤ (l − a)/l = ϕ*.    (21)

Now considering the off-equilibrium information set NC, the insurer always gets a better payoff by choosing NA. Thus, NA is a dominant strategy of the insurer against the off-equilibrium information set NC, and the insurer's belief λ remains arbitrary. c) Policyholder's sequentially rational condition given the insurer's best response: Knowing the best responses of the insurer, i.e., (A, NA) for ϕ ≤ ϕ* and (NA, NA) for ϕ > ϕ* against CD, we derive the best response of the policyholder. For the insurer's strategy profile (NA, NA), P_S gets a payoff U(W − p + d − c) by choosing CD. If she deviates to NC, she will get a payoff U(W − p − c), which is undesirable. P_N receives a payoff U(W − p + d) by choosing CD; if she deviates to NC, she will get a payoff U(W − p), which is also undesirable. Thus, ((CD, CD), (NA, NA)) can be verified as a PBE given ϕ > ϕ* and µ = ϕ. Note that the PBE includes the updated beliefs of the insurer, implicitly satisfying Requirement 3.

From the PBE, we can see that if l > a, l > d, and the insurer's belief ϕ is greater than the threshold value ϕ*, not auditing a breach is optimal for the insurer and claiming the premium discount is optimal for the policyholder regardless of her type. When the insurer's belief is ϕ ≤ ϕ*, there exists no pure-strategy PBE. As a result, both players will mix their strategies. We discuss this mixed-strategy PBE below. Note that, in the following, we use the inner tuple (x, 1 − x) to indicate a mixed strategy where the player chooses the first action with probability x and the second action with probability 1 − x.

Theorem 2. For ϕ ≤ (l − a)/l, l > a, l > d, CIAG has only one mixed-strategy PBE, in which:
• P_S always prefers CD, while P_N randomizes between CD and NC with probability δ and 1 − δ, respectively;
• the insurer randomizes between A and NA with probability θ and 1 − θ, respectively, against CD, and she always prefers NA against NC, with her beliefs about P_S playing CD and NC being µ = ϕ/(ϕ + (1 − ϕ)δ) and λ = 0, respectively, where

δ = a / ((1 − ϕ)l),    θ = [U(W − p + d) − U(W − p)] / (β · [U(W − p + d) − U(W − p + d − l)])    (22)

Proof. The existence of the mixed-strategy PBE is outlined below. a) Belief consistency: Again we apply Bayes' rule. Assuming that the policyholder sticks to the equilibrium strategy, the insurer can derive that Pr(CD|P_S) = 1, Pr(NC|P_S) = 0, Pr(P_S) = ϕ, Pr(P_N) = 1 − ϕ, Pr(CD|P_N) = δ, and Pr(NC|P_N) = 1 − δ. Using Equations (13) and (14), we obtain µ = ϕ/(ϕ + (1 − ϕ)δ) and λ = 0, as stated above. b) Optimal responses given beliefs and the opponent's strategy: Given these beliefs and the mixed strategy of the policyholder, the insurer's optimal strategy maximizes her payoff. The insurer can achieve this by randomizing her actions such that the expected payoffs of all the policyholder's actions are equal. This is known as the indifference principle in game theory.
Thus, the expected utility of P_N for choosing CD is

U^{P_N}_{CD} = β · [θ · U^{P_N}_{CD,B,A} + (1 − θ) · U^{P_N}_{CD,B,NA}] + (1 − β) · U^{P_N}_{CD,NB}
             = β · θ · U(W − p + d − l) + β · (1 − θ) · U(W − p + d) + (1 − β) · U(W − p + d)
             = β · θ · [U(W − p + d − l) − U(W − p + d)] + U(W − p + d)    (23)

and for choosing NC, where NA is a dominating strategy of the insurer, is

U^{P_N}_{NC} = β · U^{P_N}_{NC,B,NA} + (1 − β) · U^{P_N}_{NC,NB} = β · U(W − p) + (1 − β) · U(W − p) = U(W − p)    (24)

The indifference principle requires that U^{P_N}_{NC} = U^{P_N}_{CD}, which gives

U(W − p) = β · θ · [U(W − p + d − l) − U(W − p + d)] + U(W − p + d),
θ = [U(W − p + d) − U(W − p)] / (β · [U(W − p + d) − U(W − p + d − l)]),

as in Equation (22). Similarly, the policyholder will also mix her strategy with the aim of making the insurer indifferent between choosing A and NA. Thus,

U^I_A = ϕ · [(1)(p − d − l − a) + (0)(p − l − a)] + (1 − ϕ) · [δ(p − d − a) + (1 − δ)(p − l − a)] = p − l − a − ϕd − δd + δl + ϕδd − ϕδl    (25)
U^I_NA = ϕ · [(1)(p − d − l) + (0)(p − l)] + (1 − ϕ) · [δ(p − d − l) + (1 − δ)(p − l)] = p − l − ϕd − δd + ϕδd    (26)

and setting U^I_A = U^I_NA gives

p − l − a − ϕd − δd + δl + ϕδd − ϕδl = p − l − ϕd − δd + ϕδd,

which yields δ = a / ((1 − ϕ)l), as in Equation (22). We derive all the possible PBEs for CIAG by exhaustively applying this methodology over all combinations of the players' strategy profiles for the four constraints described in Equations (15) to (18). Figure 2 presents the solution space of CIAG. It further shows how the equilibrium strategies of the players depend on the premium discount (d), the audit cost (a), and the loss (l).

Model Evaluation

Our analysis in Section 4 provides a framework for insurers to determine the optimal auditing strategy against policyholders who can misrepresent their security levels to obtain premium discounts. This section illustrates the methodology used to obtain values for the various parameters of our model, and presents simulation results using these values to determine the best strategy for the insurer.

Methodology and Data Collection

A diverse set of data sources is needed to study the interaction between insurance pricing, the effectiveness of security controls, and the cost of auditing claims. To this end, we combine the following data sources: a US law requiring insurers to report pricing algorithms [6], analysis of a data set of over 12,000 cyber events [2], a study of the cost and effectiveness of security controls [31], and a range of informal estimates regarding the cost of an information security audit. The model assumes that nature determines incidents according to a Bernoulli distribution with loss amount l and probability of loss β. Analysis of the data set of 12,000 cyber incidents reveals that data breach incidents occur with a median loss of $170K and a frequency of around 0.015 for information firms [2], which we use as l and β, respectively. We adopt the security control model used in [31]. Both fixed and operational costs are estimated using industry reports, which correspond to c in our model. The effectiveness of a control is represented as a percentage decrease in the size or frequency of losses. For example, operating a firewall ($2,960) is said to reduce losses by 80% [31], leading to a probability of breach after investment (β*) of 0.2β. We downloaded all of the cyber insurance filings in the state of California and discarded off-the-shelf policies that do not change the price based on revenue, industry or security controls.
This left 26 different pricing algorithms and corresponding rate tables, the contents of which are described in [6]. Data breach coverage with a $1 million limit was selected because it is the default coverage and it comfortably covers the loss value l for a data breach on SMEs. The premium p and discount d vary based on the insurer. We chose a filing explicitly mentioning discounts for firewalls. For an information firm with $40M of revenue, the premium p is equal to $3,630, and the filings provide a range of discounts up to 25%. The exact value depends on an underwriter's subjective judgment; to account for this, we consider multiple discounts in this range. Estimating the insurer's cost of audit (a) is difficult because audits could be conducted by loss adjusters within the firm or contracted out to IT specialists. With the latter in mind, we explored the cost of an information security audit. The cost depends on the depth of the assessment and the expertise of the assessor. However, collating the quoted figures suggests a range from $5,000 up to $100,000.

Numerical Analysis

We simulate the interaction between the cyber insurance policyholder and the insurer based on our game-theoretic model with the parameter values described above. First, we compare the expected payoffs of the insurer for different strategic models:
1. the game-theoretic approach (GT), where the insurer chooses an appropriate strategy according to our analysis (refer to Figure 2) and can either audit or not audit;
2. always auditing (A,A), regardless of whether the policyholder has claimed a discount or not;
3. never auditing (NA,NA), regardless of whether the policyholder has claimed a discount or not;
4. auditing if the policyholder has claimed a discount and not auditing if no discount is claimed (A,NA);
5. not auditing if the policyholder has claimed a discount and auditing if no discount is claimed (NA,A);
6. auditing half the time, regardless of whether the policyholder has claimed a discount or not (0.5A,0.5A);
7. auditing half the time when the policyholder has claimed a discount and not auditing if no discount is claimed (0.5A,NA).

In the following simulation figures, the insurer's average payoffs with each strategic model are calculated against a policyholder who plays the PBE strategy obtained through our analysis. This policyholder is also the most challenging one for the insurer, as she claims a discount even in the case of non-investment. The term "x repetitions of the game" reflects that CIAG is played for x independent runs for a given set of parameter values. From Figures 3a and 3b, we observe that the payoff of the insurer when choosing the GT model is always better than with the rest of the strategic models, irrespective of the premium discount. The reason for this is that the model (A,NA), where the insurer audits only policyholders who have claimed the discount, is susceptible to auditing clients who have actually implemented the additional security level, bearing the auditing cost as a pure loss. Thus, the larger the number of honest policyholders, the higher the insurer's loss. Additionally, the insurer's loss, as expected, increases with the increasing cost of audit. With the (NA,NA) model, the insurer chooses to reimburse the loss without confirming the policyholder's actual security level. Here, the insurer indemnifies even in cases where the policyholder has misrepresented her security level, suffering heavy losses.
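As a rough, self-contained illustration of the kind of comparison just described, the following Python sketch estimates the insurer's average payoff per contract under (NA,NA), (A,NA), and a simplified threshold rule inspired by the game-theoretic solution (audit discount claims only when the prior belief ϕ is at most ϕ* = (l − a)/l). This is not the authors' simulation code: the full GT model mixes between A and NA with probability θ in the low-belief region, and the payoff bookkeeping below is our own simplification of the model; parameter values follow the data collection section, except the discount, which we set to 25% of the premium.

```python
import random

def simulate(audit_rule, phi, runs=100_000, p=3630.0, d=0.25 * 3630.0,
             l=170_000.0, a=5_000.0, beta=0.015, beta_star=0.003, seed=0):
    """Average insurer payoff per contract against the 'most challenging'
    policyholder, who claims the discount whether or not she invested."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        invested = rng.random() < phi              # Nature draws the type P_S / P_N
        payoff = p - d                             # premium collected, discount granted
        breach = rng.random() < (beta_star if invested else beta)
        if breach:
            if audit_rule(claimed=True, phi=phi):
                payoff -= a                        # audit cost
                if invested:
                    payoff -= l                    # honest claim: indemnity paid
                # else: misrepresentation detected, indemnity refused
            else:
                payoff -= l                        # no audit: indemnity always paid
        total += payoff
    return total / runs

def never_audit(claimed, phi):                     # (NA,NA)
    return False

def audit_claims(claimed, phi):                    # (A,NA)
    return claimed

def gt_threshold(claimed, phi, l=170_000.0, a=5_000.0):
    # Simplified game-theoretic rule: audit discount claims only when the
    # prior belief is at most phi* = (l - a) / l (the full PBE mixes with
    # probability theta in that region instead of always auditing).
    return claimed and phi <= (l - a) / l

for phi in (0.5, 0.99):
    for name, rule in [("(NA,NA)", never_audit), ("(A,NA)", audit_claims),
                       ("GT-threshold", gt_threshold)]:
        print(f"phi={phi}  {name:>12}: {simulate(rule, phi):10.2f}")
```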
Another non-strategic approach would be to randomize over the choice of auditing or not auditing a policyholder who has claimed a premium discount; this strategy corresponds to the model (0.5A,NA) described above. The GT model presents an optimal mix of (A,NA) and (NA,NA), where the insurer's decision to audit is based on a prior belief regarding the policyholder's security investment. For the median loss of $170k, which is greater than both the audit cost and the premium discount, the game solution is derived from the upper-right section of the solution space in Figure 2. In particular, when the insurer's belief (ϕ) regarding the policyholder's security investment is greater than a threshold (ϕ*), she prefers (NA,NA), i.e., PBE 2: ((CD,CD),(NA,NA)) with ϕ > ϕ*. When the belief is lower than ϕ*, she prefers a mixed approach (PBE 3) by simultaneously relying on (A,NA) and (NA,NA) and choosing whichever is more profitable. The GT model thus enables the insurer to take into account a prior belief regarding the policyholder's security investment under the condition of information asymmetry, and to maximize her payoffs given this belief. The figures further show that, regardless of how many times the game has been played, the GT model performs better than the non-game-theoretic models. With a higher audit cost, i.e., $100k in Figures 4a and 4b, we observe that the insurer's average payoff with the model (A,NA) decreases drastically, confirming its shortcomings as discussed above. In the case of 1500 independent repetitions for the highest values of audit cost and premium discount, the insurer gains, on average, a higher payoff when choosing GT as opposed to the (NA,NA) model. The increased difference in payoff is equivalent to 98% of the annual premium charged to the policyholder.

Next, simulation results are obtained over 100 repetitions with a median loss of $170k against a range of audit costs, premium discounts and losses. Note that the models (A,A), (NA,A), and (0.5A,0.5A) are omitted from the figures as they perform worse than the others, and for ease of presentation. Figures 5a and 5b show that there is a point of convergence where the strategy largely does not matter, but as the audit cost increases there is motivation for playing the game-theoretic solution, as any other solution is worse. As the discount increases, a policyholder might be highly stimulated to receive the premium discount, given that the insurer will grant it without auditing her before an incident occurs. This increases the possibility of the policyholder misrepresenting her actual security level. Given this possibility, GT noticeably dominates the other strategic models, as seen in Figures 5c and 5d. Further, in the case of 100 independent repetitions with the highest values of premium discount and audit cost, deploying GT gives the insurer on average a higher payoff compared to the next best model, which is (NA,NA). The increased difference in payoff is equivalent to 60% of the annual premium charged to the policyholder. Remark 2: For a constant loss, as the premium discount and audit cost increase, GT outperforms all other strategic models. Figure 6 shows that there is essentially nothing special about the loss as a contributing factor when the audit cost is low, but it becomes discriminatory as the audit cost approaches the loss. GT and (NA,NA) perform equally well until this condition is reached, but as the discount increases together with the audit cost, GT exceeds (NA,NA).
In this case, for 100 independent repetitions, insurers gain on average a higher payoff with the GT model compared to the (NA,NA) model, the next best. The increased difference in payoff is equivalent to 66% of the annual premium charged to the policyholder. Remark 3: As the premium discount, audit cost, and loss increase, GT consistently outperforms all other strategic models. In summary, we have demonstrated how an insurer may use our framework in practice to determine the best auditing strategy against a policyholder. We have illustrated how the insurer's payoff is maximized by strategically choosing to audit or not in the event of a breach. Such strategic behaviour also allows the insurer to maximize her payoff against policyholders who can misrepresent their security levels to obtain a premium discount.

Conclusion

Speaking to cyber insurance providers reveals concerns about the discrepancy between the security policies applicants report that they follow in the application process and the applicants' compliance with these policies once coverage is in place. To address this, we developed a game-theoretic framework investigating audits as a mechanism to disincentivize misrepresentation of security levels by policyholders. Thus far, we know of one instance [10] of cyber insurance coverage being denied due to non-compliance with the security practices defined in the insurance contract. Although there could have been denials settled in private, this suggests that most cyber insurance providers follow the never-audit strategy. Our analysis derived a game-theoretic strategy that outperforms naïve strategies, such as never auditing. By considering the post-incident claims management process, we demonstrated how a cyber insurance market can avoid collapse (contradicting [26]) when the policyholder can fraudulently report their security level.

To extend this paper, future work could consider modelling uncertainty about the effectiveness of the implemented security measure. In the current model, the policyholder's type is chosen by Nature according to some probability distribution. The model could be extended such that the policyholder maximizes expected payoff by selecting an investment strategy based on the beliefs about her type. This consideration would extend, for example, our analysis to the overall utility function of the policyholder, that is, considering both the investment and no-investment types simultaneously and maximizing the expected payoff. Another interesting direction is investigating how the potential loss l changes as a function of the security investment; in this case, we would be looking into different types of risk profiles of the policyholders. We could also investigate the trade-off between the additional investment, the discount, and the residual risk. Finally, a future extension could make investment in security a strategic choice for the policyholder in a multi-round game with a no-claims bonus, as our data set describes the size of these discounts. We could also allow belief updates to influence insurer choices on each iteration.
5,971
1908.04686
2967058823
We show how to build several data structures of central importance to string processing, taking as input the Burrows-Wheeler transform (BWT) and using small extra working space. Let @math be the text length and @math be the alphabet size. We first provide two algorithms that enumerate all LCP values and suffix tree intervals in @math time using just @math bits of working space on top of the input BWT. Using these algorithms as building blocks, for any parameter @math we show how to build the PLCP bitvector and the balanced parentheses representation of the suffix tree topology in @math time using at most @math bits of working space on top of the input BWT and the output. In particular, this implies that we can build a compressed suffix tree from the BWT using just succinct working space (i.e. @math bits) and any time in @math . This improves the previous most space-efficient algorithms, which worked in @math bits and @math time. We also consider the problem of merging BWTs of string collections, and provide a solution running in @math time and using just @math bits of working space. An efficient implementation of our LCP construction and BWT merge algorithms use (in RAM) as few as @math bits on top of a packed representation of the input output and process data as fast as @math megabases per second.
As far as the CSA is concerned, this component can be easily built from the BWT using small space as it is formed (in its simplest design) by just a BWT with rank select functionality enhanced with a suffix array sampling, see also @cite_21 .
{ "abstract": [ "We show that the compressed suffix array and the compressed suffix tree for a string of length n over an integer alphabet of size σ ≤ n can both be built in O(n) (randomized) time using only O(n log σ) bits of working space. The previously fastest construction algorithms that used O(n log σ) bits of space took times O(n log log σ) and O(n loge n) respectively (where e is any positive constant smaller than 1)." ], "cite_N": [ "@cite_21" ], "mid": [ "1988110322" ] }
0
1908.04686
2967058823
We show how to build several data structures of central importance to string processing, taking as input the Burrows-Wheeler transform (BWT) and using small extra working space. Let @math be the text length and @math be the alphabet size. We first provide two algorithms that enumerate all LCP values and suffix tree intervals in @math time using just @math bits of working space on top of the input BWT. Using these algorithms as building blocks, for any parameter @math we show how to build the PLCP bitvector and the balanced parentheses representation of the suffix tree topology in @math time using at most @math bits of working space on top of the input BWT and the output. In particular, this implies that we can build a compressed suffix tree from the BWT using just succinct working space (i.e. @math bits) and any time in @math . This improves the previous most space-efficient algorithms, which worked in @math bits and @math time. We also consider the problem of merging BWTs of string collections, and provide a solution running in @math time and using just @math bits of working space. An efficient implementation of our LCP construction and BWT merge algorithms use (in RAM) as few as @math bits on top of a packed representation of the input output and process data as fast as @math megabases per second.
We are aware of only one work building the LCP array in small space from the BWT: @cite_38 show how to build the LCP array in @math time and @math bits of working space on top of the input BWT and the output. Other works @cite_15 @cite_21 show how to build the LCP array directly from the text in @math time and @math bits of space (compact).
{ "abstract": [ "Many sequence analysis tasks can be accomplished with a suffix array, and several of them additionally need the longest common prefix array. In large scale applications, suffix arrays are being replaced with full-text indexes that are based on the Burrows-Wheeler transform. In this paper, we present the first algorithm that computes the longest common prefix array directly on the wavelet tree of the Burrows-Wheeler transformed string. It runs in linear time and a practical implementation requires approximately 2.2 bytes per character.", "We show that the compressed suffix array and the compressed suffix tree of a string T can be built in O(n) deterministic time using O(n log σ) bits of space, where n is the string length and σ is the alphabet size. Previously described deterministic algorithms either run in time that depends on the alphabet size or need ω(n log σ) bits of working space. Our result has immediate applications to other problems, such as yielding the first deterministic linear-time LZ77 and LZ78 parsing algorithms that use O(n log σ) bits.", "We show that the compressed suffix array and the compressed suffix tree for a string of length n over an integer alphabet of size σ ≤ n can both be built in O(n) (randomized) time using only O(n log σ) bits of working space. The previously fastest construction algorithms that used O(n log σ) bits of space took times O(n log log σ) and O(n loge n) respectively (where e is any positive constant smaller than 1)." ], "cite_N": [ "@cite_38", "@cite_15", "@cite_21" ], "mid": [ "2042175004", "2963440221", "1988110322" ] }
0
1908.04686
2967058823
We show how to build several data structures of central importance to string processing, taking as input the Burrows-Wheeler transform (BWT) and using small extra working space. Let @math be the text length and @math be the alphabet size. We first provide two algorithms that enumerate all LCP values and suffix tree intervals in @math time using just @math bits of working space on top of the input BWT. Using these algorithms as building blocks, for any parameter @math we show how to build the PLCP bitvector and the balanced parentheses representation of the suffix tree topology in @math time using at most @math bits of working space on top of the input BWT and the output. In particular, this implies that we can build a compressed suffix tree from the BWT using just succinct working space (i.e. @math bits) and any time in @math . This improves the previous most space-efficient algorithms, which worked in @math bits and @math time. We also consider the problem of merging BWTs of string collections, and provide a solution running in @math time and using just @math bits of working space. An efficient implementation of our LCP construction and BWT merge algorithms use (in RAM) as few as @math bits on top of a packed representation of the input output and process data as fast as @math megabases per second.
K "a rkk "a @cite_26 show that the PLCP bitvector can be built in @math time using @math bits of working space on top of the text, the suffix array, and the output PLCP. Kasai at al.'s lemma also stands at the basis of a more space-efficient algorithm from V " a lim " a @cite_18 , which computes the PLCP from a CSA in @math time using constant working space on top of the CSA and the output. Belazzougui @cite_21 recently presented an algorithm for building the PLCP bitvector from the text in optimal @math time and compact space ( @math bits).
{ "abstract": [ "Suffix tree is one of the most important data structures in string algorithms and biological sequence analysis. Unfortunately, when it comes to implementing those algorithms and applying them to real genomic sequences, often the main memory size becomes the bottleneck. This is easily explained by the fact that while a DNA sequence of length n from alphabet Σ e A,C,G,T can be stored in n log vΣv e 2n bits, its suffix tree occupiesO(n log n) bits. In practice, the size difference easily reaches factor 50. We report on an implementation of the compressed suffix tree very recently proposed by Sadakane (2007). The compressed suffix tree occupies space proportional to the text size, that is, O(n log vΣv) bits, and supports all typical suffix tree operations with at most log n factor slowdown. Our experiments show that, for example, on a 10 MB DNA sequence, the compressed suffix tree takes 10p of the space of the normal suffix tree. At the same time, a representative algorithm is slowed down by factor 30. Our implementation follows the original proposal in spirit, but some internal parts are tailored toward practical implementation. Our construction algorithm has time requirement O(n log n log vΣv) and uses closely the same space as the final structure while constructing it: on the 10MB DNA sequence, the maximum space usage during construction is only 1.5 times the final product size. As by-products, we develop a method to create Succinct Suffix Array directly from Burrows-Wheeler transform and a space-efficient version of the suffixes-insertion algorithm to build balanced parentheses representation of suffix tree from LCP information.", "The longest-common-prefix (LCP) array is an adjunct to the suffix array that allows many string processing problems to be solved in optimal time and space. Its construction is a bottleneck in practice, taking almost as long as suffix array construction. In this paper, we describe algorithms for constructing the permuted LCP (PLCP) array in which the values appear in position order rather than lexicographical order. Using the PLCP array, we can either construct or simulate the LCP array. We obtain a family of algorithms including the fastest known LCP construction algorithm and some extremely space efficient algorithms. We also prove a new combinatorial property of the LCP values.", "We show that the compressed suffix array and the compressed suffix tree for a string of length n over an integer alphabet of size σ ≤ n can both be built in O(n) (randomized) time using only O(n log σ) bits of working space. The previously fastest construction algorithms that used O(n log σ) bits of space took times O(n log log σ) and O(n loge n) respectively (where e is any positive constant smaller than 1)." ], "cite_N": [ "@cite_18", "@cite_26", "@cite_21" ], "mid": [ "2146346985", "2143469992", "1988110322" ] }
0
1908.04686
2967058823
We show how to build several data structures of central importance to string processing, taking as input the Burrows-Wheeler transform (BWT) and using small extra working space. Let @math be the text length and @math be the alphabet size. We first provide two algorithms that enumerate all LCP values and suffix tree intervals in @math time using just @math bits of working space on top of the input BWT. Using these algorithms as building blocks, for any parameter @math we show how to build the PLCP bitvector and the balanced parentheses representation of the suffix tree topology in @math time using at most @math bits of working space on top of the input BWT and the output. In particular, this implies that we can build a compressed suffix tree from the BWT using just succinct working space (i.e. @math bits) and any time in @math . This improves the previous most space-efficient algorithms, which worked in @math bits and @math time. We also consider the problem of merging BWTs of string collections, and provide a solution running in @math time and using just @math bits of working space. An efficient implementation of our LCP construction and BWT merge algorithms use (in RAM) as few as @math bits on top of a packed representation of the input output and process data as fast as @math megabases per second.
The remaining component required to build a compressed suffix tree (in the version described by Sadakane @cite_10) is the suffix tree topology, represented either in BPS @cite_19 (balanced parentheses) or DFUDS @cite_41 (depth-first unary degree sequence), using @math bits. As far as the BPS representation is concerned, @cite_8 show how to build it from a CSA in @math time and compact space for any constant @math . Belazzougui @cite_21 improves this running time to the optimal @math , still working within compact space. Välimäki et al. @cite_18 describe a linear-time algorithm that improves the space to @math bits on top of the LCP array (which, however, needs to be represented in plain form), while @cite_33 show how to build the DFUDS representation of the suffix tree topology in @math time using @math bits of working space on top of a structure supporting access to LCP array values in @math time.
{ "abstract": [ "Suffix tree is one of the most important data structures in string algorithms and biological sequence analysis. Unfortunately, when it comes to implementing those algorithms and applying them to real genomic sequences, often the main memory size becomes the bottleneck. This is easily explained by the fact that while a DNA sequence of length n from alphabet Σ e A,C,G,T can be stored in n log vΣv e 2n bits, its suffix tree occupiesO(n log n) bits. In practice, the size difference easily reaches factor 50. We report on an implementation of the compressed suffix tree very recently proposed by Sadakane (2007). The compressed suffix tree occupies space proportional to the text size, that is, O(n log vΣv) bits, and supports all typical suffix tree operations with at most log n factor slowdown. Our experiments show that, for example, on a 10 MB DNA sequence, the compressed suffix tree takes 10p of the space of the normal suffix tree. At the same time, a representative algorithm is slowed down by factor 30. Our implementation follows the original proposal in spirit, but some internal parts are tailored toward practical implementation. Our construction algorithm has time requirement O(n log n log vΣv) and uses closely the same space as the final structure while constructing it: on the 10MB DNA sequence, the maximum space usage during construction is only 1.5 times the final product size. As by-products, we develop a method to create Succinct Suffix Array directly from Burrows-Wheeler transform and a space-efficient version of the suffixes-insertion algorithm to build balanced parentheses representation of suffix tree from LCP information.", "", "Suffix trees and suffix arrays are the most prominent full-text indices, and their construction algorithms are well studied. In the literature, the fastest algorithm runs in @math time, while it requires @math -bit working space, where @math denotes the length of the text. On the other hand, the most space-efficient algorithm requires @math -bit working space while it runs in @math time. It was open whether these indices can be constructed in both @math time and @math -bit working space. This paper breaks the above time-and-space barrier under the unit-cost word RAM. We give an algorithm for constructing the suffix array, which takes @math time and @math -bit working space, for texts with constant-size alphabets. Note that both the time and the space bounds are optimal. For constructing the suffix tree, our algorithm requires @math time and @math -bit working space for any @math . Apart from that, our algorithm can also be adopted to build other existing full-text indices, such as compressed suffix tree, compressed suffix arrays, and FM-index. We also study the general case where the size of the alphabet @math is not constant. Our algorithm can construct a suffix array and a suffix tree using optimal @math -bit working space while running in @math time and @math time, respectively. These are the first algorithms that achieve @math time with optimal working space. Moreover, for the special case where @math , we can speed up our suffix array construction algorithm to the optimal @math .", "This paper focuses on space efficient representations of rooted trees that permit basic navigation in constant time. While most of the previous work has focused on binary trees, we turn our attention to trees of higher degree. 
We consider both cardinal trees (or k-ary tries), where each node has k slots, labelled 1,...,k , each of which may have a reference to a child, and ordinal trees, where the children of each node are simply ordered. Our representations use a number of bits close to the information theoretic lower bound and support operations in constant time. For ordinal trees we support the operations of finding the degree, parent, ith child, and subtree size. For cardinal trees the structure also supports finding the child labelled i of a given node apart from the ordinal tree operations. These representations also provide a mapping from the n nodes of the tree onto the integers 1, ..., n , giving unique labels to the nodes of the tree. This labelling can be used to store satellite information with the nodes efficiently.", "We show that the compressed suffix array and the compressed suffix tree for a string of length n over an integer alphabet of size σ ≤ n can both be built in O(n) (randomized) time using only O(n log σ) bits of working space. The previously fastest construction algorithms that used O(n log σ) bits of space took times O(n log log σ) and O(n loge n) respectively (where e is any positive constant smaller than 1).", "We consider the implementation of abstract data types for the static objects: binary tree, rooted ordered tree and balanced parenthesis expression. Our representations use an amount of space within a lower order term of the information theoretic minimum and support, in constant time, a richer set of navigational operations than has previously been considered in similar work. In the case of binary trees, for instance, we can move from a node to its left or right child or to the parent in constant time while retaining knowledge of the size of the subtree at which we are positioned. The approach is applied to produce succinct representation of planar graphs in which one can test adjacency in constant time.", "We introduce new data structures for compressed suffix trees whose size are linear in the text size. The size is measured in bits; thus they occupy only O(n log|A|) bits for a text of length n on an alphabet A. This is a remarkable improvement on current suffix trees which require O(n log n) bits. Though some components of suffix trees have been compressed, there is no linear-size data structure for suffix trees with full functionality such as computing suffix links, string-depths and lowest common ancestors. The data structure proposed in this paper is the first one that has linear size and supports all operations efficiently. Any algorithm running on a suffix tree can also be executed on our compressed suffix trees with a slight slowdown of a factor of polylog(n)." ], "cite_N": [ "@cite_18", "@cite_33", "@cite_8", "@cite_41", "@cite_21", "@cite_19", "@cite_10" ], "mid": [ "2146346985", "", "2105633925", "2148113067", "1988110322", "2116365500", "2073921136" ] }
0
1908.04686
2967058823
We show how to build several data structures of central importance to string processing, taking as input the Burrows-Wheeler transform (BWT) and using small extra working space. Let @math be the text length and @math be the alphabet size. We first provide two algorithms that enumerate all LCP values and suffix tree intervals in @math time using just @math bits of working space on top of the input BWT. Using these algorithms as building blocks, for any parameter @math we show how to build the PLCP bitvector and the balanced parentheses representation of the suffix tree topology in @math time using at most @math bits of working space on top of the input BWT and the output. In particular, this implies that we can build a compressed suffix tree from the BWT using just succinct working space (i.e. @math bits) and any time in @math . This improves the previous most space-efficient algorithms, which worked in @math bits and @math time. We also consider the problem of merging BWTs of string collections, and provide a solution running in @math time and using just @math bits of working space. An efficient implementation of our LCP construction and BWT merge algorithms use (in RAM) as few as @math bits on top of a packed representation of the input output and process data as fast as @math megabases per second.
In this paper, we give new space-time trade-offs that allow building the CST's components in smaller working space (and in some cases even faster) with respect to the existing solutions. We start by combining the algorithm of @cite_38 with the suffix-tree enumeration procedure of Belazzougui @cite_21 to obtain an algorithm that enumerates (i) all pairs @math , and (ii) all suffix tree intervals in @math time using just @math bits of working space on top of the input BWT. We use this procedure to obtain algorithms that build (working space is on top of the input BWT and the output):
{ "abstract": [ "Many sequence analysis tasks can be accomplished with a suffix array, and several of them additionally need the longest common prefix array. In large scale applications, suffix arrays are being replaced with full-text indexes that are based on the Burrows-Wheeler transform. In this paper, we present the first algorithm that computes the longest common prefix array directly on the wavelet tree of the Burrows-Wheeler transformed string. It runs in linear time and a practical implementation requires approximately 2.2 bytes per character.", "We show that the compressed suffix array and the compressed suffix tree for a string of length n over an integer alphabet of size σ ≤ n can both be built in O(n) (randomized) time using only O(n log σ) bits of working space. The previously fastest construction algorithms that used O(n log σ) bits of space took times O(n log log σ) and O(n loge n) respectively (where e is any positive constant smaller than 1)." ], "cite_N": [ "@cite_38", "@cite_21" ], "mid": [ "2042175004", "1988110322" ] }
0
1908.04686
2967058823
We show how to build several data structures of central importance to string processing, taking as input the Burrows-Wheeler transform (BWT) and using small extra working space. Let @math be the text length and @math be the alphabet size. We first provide two algorithms that enumerate all LCP values and suffix tree intervals in @math time using just @math bits of working space on top of the input BWT. Using these algorithms as building blocks, for any parameter @math we show how to build the PLCP bitvector and the balanced parentheses representation of the suffix tree topology in @math time using at most @math bits of working space on top of the input BWT and the output. In particular, this implies that we can build a compressed suffix tree from the BWT using just succinct working space (i.e. @math bits) and any time in @math . This improves the previous most space-efficient algorithms, which worked in @math bits and @math time. We also consider the problem of merging BWTs of string collections, and provide a solution running in @math time and using just @math bits of working space. An efficient implementation of our LCP construction and BWT merge algorithms use (in RAM) as few as @math bits on top of a packed representation of the input output and process data as fast as @math megabases per second.
Also contribution ) improves the state-of-the-art, due to @cite_21 @cite_31 . In those papers, the authors show how to merge the BWTs of two texts @math and obtain the BWT of the collection @math in @math time and @math bits of working space for any @math [Thm. 7] belazzougui2016linear . When @math , this running time is the same as our result ), but the working space is much higher on small alphabets.
{ "abstract": [ "The field of succinct data structures has flourished over the last 16 years. Starting from the compressed suffix array (CSA) by Grossi and Vitter (STOC 2000) and the FM-index by Ferragina and Manzini (FOCS 2000), a number of generalizations and applications of string indexes based on the Burrows-Wheeler transform (BWT) have been developed, all taking an amount of space that is close to the input size in bits. In many large-scale applications, the construction of the index and its usage need to be considered as one unit of computation. Efficient string indexing and analysis in small space lies also at the core of a number of primitives in the data-intensive field of high-throughput DNA sequencing. We report the following advances in string indexing and analysis. We show that the BWT of a string @math can be built in deterministic @math time using just @math bits of space, where @math . Within the same time and space budget, we can build an index based on the BWT that allows one to enumerate all the internal nodes of the suffix tree of @math . Many fundamental string analysis problems can be mapped to such enumeration, and can thus be solved in deterministic @math time and in @math bits of space from the input string. We also show how to build many of the existing indexes based on the BWT, such as the CSA, the compressed suffix tree (CST), and the bidirectional BWT index, in randomized @math time and in @math bits of space. The previously fastest construction algorithms for BWT, CSA and CST, which used @math bits of space, took @math time for the first two structures, and @math time for the third, where @math is any positive constant. Contrary to the state of the art, our bidirectional BWT index supports every operation in constant time per element in its output.", "We show that the compressed suffix array and the compressed suffix tree for a string of length n over an integer alphabet of size σ ≤ n can both be built in O(n) (randomized) time using only O(n log σ) bits of working space. The previously fastest construction algorithms that used O(n log σ) bits of space took times O(n log log σ) and O(n loge n) respectively (where e is any positive constant smaller than 1)." ], "cite_N": [ "@cite_31", "@cite_21" ], "mid": [ "2522621913", "1988110322" ] }
0
1908.04628
2968100992
Many real-world prediction tasks have outcome (a.k.a. target or response) variables that have characteristic heavy-tail distributions. Examples include copies of books sold, auction prices of art pieces, etc. By learning heavy-tailed distributions, big and rare'' instances (e.g., the best-sellers) will have accurate predictions. Most existing approaches are not dedicated to learning heavy-tailed distribution; thus, they heavily under-predict such instances. To tackle this problem, we introduce ( L2P ), which exploits the pairwise relationships between instances to learn from a proportionally higher number of rare instances. L2P consists of two stages. In Stage 1, L2P learns a pairwise preference classifier: . In Stage 2, L2P learns to place a new instance into an ordinal ranking of known instances. Based on its placement, the new instance is then assigned a value for its outcome variable. Experiments on real data show that L2P outperforms competing approaches in terms of accuracy and capability to reproduce heavy-tailed outcome distribution. In addition, L2P can provide an interpretable model with explainable outcomes by placing each predicted instance in context with its comparable neighbors.
In real-world applications like search engines and recommendation systems, systems provide ranked lists tailored to users and their queries @cite_11 @cite_16 @cite_5 . In some cases, mapping those preferences into an ordinal variable leads to better user experience. Such tasks require the use of regression and multi-class classification methods @cite_40 .
{ "abstract": [ "With Web mail services offering larger and larger storage capacity, most users do not feel the need to systematically delete messages anymore and inboxes keep growing. It is quite surprising that in spite of the huge progress of relevance ranking in Web Search, mail search results are still typically ranked by date. This can probably be explained by the fact that users demand perfect recall in order to \"re-find\" a previously seen message, and would not trust relevance ranking. Yet mail search is still considered a difficult and frustrating task, especially when trying to locate older messages. In this paper, we study the current search traffic of Yahoo mail, a major Web commercial mail service, and discuss the limitations of ranking search results by date. We argue that this sort-by-date paradigm needs to be revisited in order to account for the specific structure and nature of mail messages, as well as the high-recall needs of users. We describe a two-phase ranking approach, in which the first phase is geared towards maximizing recall and the second phase follows a learning-to-rank approach that considers a rich set of mail-specific features to maintain precision. We present our results obtained on real mail search query traffic, for three different datasets, via manual as well as automatic evaluation. We demonstrate that the default time-driven ranking can be significantly improved in terms of both recall and precision, by taking into consideration time recency and textual similarity to the query, as well as mail-specific signals such as users' actions.", "We investigate the problem of predicting variables of ordinal scale. This task is referred to as ordinal regression and is complementary to the standard machine learning tasks of classification and metric regression. In contrast to statistical models we present a distribution independent formulation of the problem together with uniform bounds of the risk functional. The approach presented is based on a mapping from objects to scalar utility values. Similar to support vector methods we derive a new learning algorithm for the task of ordinal regression based on large margin rank boundaries. We give experimental results for an information retrieval task: learning the order of documents with respect to an initial query. Experimental results indicate that the presented algorithm outperforms more naive approaches to ordinal regression such as support vector classification and support vector regression in the case of more than two ranks.", "We show that incorporating user behavior data can significantly improve ordering of top results in real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithms by as much as 31 relative to the original performance.", "This paper presents an approach to automatically optimizing the retrieval quality of search engines using clickthrough data. Intuitively, a good information retrieval system should present relevant documents high in the ranking, with less relevant documents following below. 
While previous approaches to learning retrieval functions from examples exist, they typically require training data generated from relevance judgments by experts. This makes them difficult and expensive to apply. The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking. Such clickthrough data is available in abundance and can be recorded at very low cost. Taking a Support Vector Machine (SVM) approach, this paper presents a method for learning retrieval functions. From a theoretical perspective, this method is shown to be well-founded in a risk minimization framework. Furthermore, it is shown to be feasible even for large sets of queries and features. The theoretical results are verified in a controlled experiment. It shows that the method can effectively adapt the retrieval function of a meta-search engine to a particular group of users, outperforming Google in terms of retrieval quality after only a couple of hundred training examples." ], "cite_N": [ "@cite_5", "@cite_40", "@cite_16", "@cite_11" ], "mid": [ "2063774778", "2058475745", "2125771191", "2047221353" ] }
L2P: LEARNING TO PLACE FOR ESTIMATING HEAVY-TAILED DISTRIBUTED OUTCOMES
Heavy-tailed distributions are prevalent in real-world data. By heavy-tailed, we mean a variable whose distribution has a heavier tail than the exponential distribution. Many real-world applications involve predicting heavy-tailed distributed outcomes. For example, publishers want to predict a book's sales numbers before its publication in order to decide the advance for the author, the effort in advertisement, the copies printed, etc. [1]. Galleries are interested in an artist's selling potential to decide whether to represent the artist or not. For inventory planning, warehouses and shops would like to know what number of each item to keep in storage [2]. All of these real-world applications involve heavy-tailed distributed outcomes: book sales, art auction prices, and demand for items. The challenge in predicting heavy-tailed outcomes lies at the tail of the distributions, i.e., the "big and rare" instances such as best-sellers, high-selling art pieces and items with huge demand. Those instances are usually the ones that attract the most interest and create the most market value. Traditional approaches tend to under-predict the rare instances at the tail of the distribution. The limiting factor for prediction performance is the insufficient amount of training data on the rare instances. Approaches tackling the class imbalance problem, such as over-sampling training instances [3], adjusting weights, and adding extra constraints [4,5,6,7], do not properly address the aforementioned problem, since those approaches assume homogeneously distributed groups with proportionally different sizes. In addition, defining distinct groups on a dataset with heavy-tailed outcomes is not trivial, since the distribution is continuous. Therefore, predicting the values of heavy-tailed variables is not merely the class imbalance problem; instead, it is the problem of learning a heavy-tailed distribution.

We present an approach called Learning to Place (L2P) to estimate heavy-tailed outcomes, and define performance measures for heavy-tailed target variable prediction. L2P learns to estimate a heavy-tailed distribution by first learning pairwise preferences between the instances and then placing the new instance within the known instances and assigning an outcome value. Our contributions are as follows:
1. We introduce Learning to Place (L2P) to estimate heavy-tailed outcomes by (i) learning from pairwise relationships between instances and (ii) placing new instances among the training data and predicting outcome values based on those placements.
2. We present appropriate statistical metrics for measuring the performance of learning heavy-tailed distributions.
3. In an exhaustive empirical study on real-world data, we demonstrate that L2P is robust and consistently outperforms various competing approaches across diverse real-world datasets.
4. Through case studies, we demonstrate that L2P not only provides accurate predictions but also is interpretable.

The outline of the paper is as follows. Section 2 presents our proposed method. Section 3 describes our experiments. Section 4 explains how our method produces models that are interpretable. Section 5 contains related work and a discussion of our findings. The paper concludes in Section 6.

2 Proposed Method: Learning to Place (L2P)

L2P takes as input a data matrix where the rows are data instances (e.g., books) and the columns are features that describe each instance (e.g., author, publisher, etc.).
Each data instance also has a value for the predefined target variable (e.g., copies of the book sold). L2P learns to map each instance's feature vector to the value of its target variable, which is the standard supervised learning setup. However, the challenges that L2P addresses are as follows. First, it learns the heavy-tailed distribution of the target variable, and thus it does not under-predict the "big and rare" instances. Second, it generates an interpretable model for the human end-user (e.g., a publisher). Figure 1 describes the training and placing phases of L2P. In the training phase, L2P learns a pairwise-relationship classifier, which predicts whether the target variable for an instance A (I_A) is greater (or less) than that of another instance B (I_B). To predict outcomes in the placing phase, the unplaced instance is compared with each training instance using the model learned in the training phase, generating pairwise relationships between the unplaced instance and the training instances. These pairwise relationships are then used as "votes" to predict the target outcome of the new instance.

Figure 1: Overview of the L2P Algorithm. In the training phase, L2P learns a classifier C on the pairwise relationship between each pair of training instances. In the placing phase, L2P applies the classifier C to obtain the pairwise relationships between unplaced instance q and all of the training instances. Then, each instance in the training set contributes to the placement of unplaced instance q by voting on bins to its left or to its right, depending on the predicted relation between the instances. The mid-value of the bin with the highest vote is assigned as the prediction.

Training Phase. For each pair of instances i and j with feature vectors f_i and f_j, L2P concatenates the two feature vectors, X_ij = [f_i, f_j]. If i's target variable is greater than j's, then y_ij = 1; otherwise, y_ij = −1 (ties are ignored in the training phase). Formally, denoting with t_i the target variable for instance i and with S the set of instances in the training set, L2P generates the following training data:

X_ij = [f_i, f_j], for each (i, j) ∈ S × S, i ≠ j, t_i ≠ t_j,    (1)
y_ij = 1 if t_i > t_j, and y_ij = −1 if t_i < t_j.    (2)

Then a classifier C is trained on the training data X_ij and labels y_ij. It is important to note that the trained classifier may produce conflicting results: for example, I_A < I_B and I_B < I_C but I_C < I_A. In the Experiments section, we demonstrate the robustness of L2P to such conflicts, which are rooted in pairwise classification error.

Placing Phase. The placing phase consists of two stages. In Stage I, for each unplaced instance q, L2P obtains X_iq = [f_i, f_q] for each i ∈ S (recall that S is the training set). Then, L2P applies the classifier C on X_iq to get the predicted pairwise relationship between the test instance q and all training instances (ŷ_iq = C(X_iq)). In Stage II, L2P treats each training instance as a "voter". Training instances (voters) are sorted by their target variables in descending order, dividing the target variable axis into bins. If ŷ_iq = 1, bins on the right of t_i obtain an upvote (+1) and bins on the left of t_i obtain a downvote (−1), and vice versa for ŷ_iq = −1. After the voting process, L2P takes the bin with the most "votes" as the predicted bin for test instance q, and obtains the prediction t̂_q as the midpoint of this bin.
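A compact sketch of the two phases described above is given below. It follows the text (concatenated pairwise feature vectors, a random forest pairwise classifier as used in the paper's experiments, and bin voting in the placing phase), but it is our own minimal reconstruction rather than the authors' released code; the helper names and the toy data are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_pairwise(F, t):
    """Training phase: learn C([f_i, f_j]) -> sign(t_i - t_j) on all ordered pairs."""
    X, y = [], []
    n = len(t)
    for i in range(n):
        for j in range(n):
            if i != j and t[i] != t[j]:              # ties are ignored
                X.append(np.concatenate([F[i], F[j]]))
                y.append(1 if t[i] > t[j] else -1)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(np.array(X), np.array(y))
    return clf

def place(clf, F, t, f_q):
    """Placing phase: vote over bins defined by the sorted training targets."""
    bins = np.sort(np.unique(t))      # unique target values (ascending; the paper sorts descending, which is equivalent)
    votes = np.zeros(len(bins) + 1)   # one slot below, between, and above each bin edge
    for f_i, t_i in zip(F, t):
        y_hat = clf.predict(np.concatenate([f_i, f_q]).reshape(1, -1))[0]
        edge = np.searchsorted(bins, t_i)            # index of the bin edge at t_i
        if y_hat == 1:                               # predicted t_i > t_q: q lies below t_i
            votes[: edge + 1] += 1
            votes[edge + 1:] -= 1
        else:                                        # predicted t_i < t_q: q lies above t_i
            votes[: edge + 1] -= 1
            votes[edge + 1:] += 1
    b = int(np.argmax(votes))
    lo = bins[max(b - 1, 0)]
    hi = bins[min(b, len(bins) - 1)]
    return (lo + hi) / 2.0                           # midpoint of the winning bin

# Toy usage with a heavy-tail-ish target.
rng = np.random.default_rng(0)
F = rng.normal(size=(60, 5))
t = np.exp(F[:, 0] * 2 + rng.normal(scale=0.3, size=60))   # log-normal-like outcome
clf = train_pairwise(F, t)
print(place(clf, F, t, rng.normal(size=5)))
```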
Theoretical Analysis of Voting Process

L2P's voting process can be viewed as maximum likelihood estimation (MLE) of the optimal placement of an instance based on the pairwise relationships. Given the test instance q, our goal is to find its optimal bin m. For any bin b, we have P(b|q) ∝ P(q|b) × P(b). Since each training instance i contributes to P(q|b), we have P(q|b) = (1/Z) Σ_{i∈S_train} P_i(q|b), where P_i(q|b) is the conditional probability of test instance q being placed in the given bin b based on its pairwise relationship with training instance i, and Z is the normalization factor, Z = Σ_b Σ_i P_i(q|b). L2P assigns two probabilities to each pair of training instance i and test instance q: p^l_i(q) and p^r_i(q), denoting the probability that the test instance q is smaller than (i.e., to the left of) or larger than (i.e., to the right of) training instance i, respectively, where p^l_i(q) + p^r_i(q) = 1. Let R^i_b ∈ {l, r} be the region defined by training instance i for bin b, and |R^i_b| the number of bins in this region. We have P_i(q|b) = p^{R^i_b}_i(q) / |R^i_b|, assuming the test instance is equally probable to fall in each bin of region R^i_b. Therefore, the optimal bin is m = argmax_b (1/Z) Σ_{i∈S} p^{R^i_b}_i(q) / |R^i_b|. We observe that p^{R^i_b}_i(q) / |R^i_b| is the "vote" that training instance i gives to bin b for test instance q; therefore, the optimal bin m is the one with the most "votes". L2P can incorporate any method that takes pairwise preferences and learns to place a test instance among the training instances. Specifically, we examined SpringRank [8], FAS-PIVOT [9] and tournament-graph-related heuristics [10]. We found that the performances of these approaches are quite similar to voting. However, voting, with its linear runtime complexity, is the most computationally efficient method among them.

Complexity Analysis

Suppose n is the number of instances in the training set. The vanilla training phase of L2P learns pairwise relationships among all pairs in the training set, leading to an O(n^2) runtime complexity, which is computationally prohibitive for large datasets, even though the training phase is an offline process and can be easily parallelized. Thus, we implemented an efficient approach based on the intuition that it is easier for a classifier to learn the pairwise relationship between instances that are far apart than between instances that are near each other. Specifically, we define two parameters: n_s, denoting the number of samples to compare with for each training instance, and k, denoting the number of instances that are considered near to each training instance. For comparison to each training instance i, L2P's efficient training phase algorithm (1) takes all k near instances of i and (2) uniformly at random samples n_s − k non-near instances of i. The nearness of two instances is measured by the difference in their target values. Our experiments with the efficient implementation of L2P lead to similar AUC scores for the overall prediction task, but reduce the runtime complexity to O(n_s × n), where n_s ≪ n. For instance, n_s is 20% of n. L2P's placing phase has a complexity of O(n) for each new (i.e., test) instance.
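The efficient training variant just described can be sketched as follows. The sampling routine below is our interpretation of the text: for each training instance, keep its k target-nearest instances and sample n_s − k of the remaining ones uniformly at random; the function name and the example value of k are assumptions.

```python
import numpy as np

def sample_training_pairs(t, n_s, k, seed=0):
    """For each training instance i, pair it with its k target-nearest
    instances plus n_s - k instances sampled uniformly from the rest
    (assumes distinct target values)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t)
    n = len(t)
    pairs = []
    for i in range(n):
        order = np.argsort(np.abs(t - t[i]))          # closest targets first; order[0] == i
        near = order[1:k + 1]                         # the k "near" instances
        far_pool = order[k + 1:]
        far = rng.choice(far_pool, size=min(n_s - k, len(far_pool)), replace=False)
        pairs.extend((i, int(j)) for j in np.concatenate([near, far]))
    return pairs

# Example: n_s set to 20% of n, as in the text; k = 5 is an assumed value.
t = np.random.default_rng(1).lognormal(size=200)
pairs = sample_training_pairs(t, n_s=40, k=5)
print(len(pairs))   # 200 * 40 = 8000 pairs instead of ~200^2
```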
Algorithm 1: L2P's Training Algorithm
Input: Training data S = (F, t)
Output: Pairwise relationship classifier C
  X = [ ];  // Feature matrix
  y = [ ];  // Label vector
  for i ← 1 to |S| do
    for j ← i + 1 to |S| do
      X.append([f_i, f_j]);
      if t_i > t_j then y.append(1)
      else if t_i < t_j then y.append(-1)
    end
  end
  C.train(X, y);
  return C

Algorithm 2: L2P's Placing Algorithm
Input: Classifier C, training data S = (F, t), unplaced instance q represented by its features f_q
Output: t̂_q = predicted value for test instance q
  B = [ ];                   // Vote counts per bin
  bins = sort(unique(t));    // Unique target values, highest to lowest
  for i ← 1 to |bins| do B[i] = 0 end
  for i ← 1 to |F| do
    ŷ_iq = C.predict([f_i, f_q])
    for j ← 1 to BinEdgeIndex(t_i) − 1 do B[j] −= ŷ_iq; end
    for j ← BinEdgeIndex(t_i) to |B| do B[j] += ŷ_iq; end
  end
  b = GetHighestBin(B);
  t̂_q = Mean(bins[b−1], bins[b]);
  return t̂_q

Experiments
In this section, we describe the data used in our experiments, the baseline and competing approaches, the experimental methodology, the evaluation metrics, and the results.
Datasets
We present results on the following real-world applications:
Book sales: This dataset consists of information about all print nonfiction and fiction books published in the United States in 2015, including features about the authors' publication history, the book summary, and the authors' popularity (for feature details, see [1]). The goal is to predict one-year book sales using features available prior to the book's publication. We treat nonfiction books and fiction books separately in the experiments.
Art auctions: This dataset combines information on artists' exhibits, auction sales, and primary-market quotes. It was previously used to quantify the success of artists based on their trajectories of exhibitions [11]. We select 7,764 paintings using vertical logarithmic binning [12]. The features include previous exhibition records (number of exhibitions, number of exhibitions at each grade), sales records (number of art pieces sold, various statistics of the prices of previously sold pieces), career length, and medium information (for the full feature list, see Supplementary Information Table 1). The prediction task is to predict the auction price of an art piece based on the artist's previous sales and exhibition history.
Table 1 provides the summary statistics of these datasets. Specifically, we calculate the kurtosis of each target variable. Kurtosis measures the "tailedness" of a probability distribution. The kurtosis of any univariate normal distribution is 3, and the higher the kurtosis, the heavier the tails. Complementary cumulative distribution functions (CCDF) of the real outcomes are shown in Fig. 2.
Baseline and Competing Approaches
To compare the predictive capabilities of L2P, we experiment with the following baseline approaches from the literature.
K-nearest neighbors (kNN). The prediction is obtained through local interpolation of the nearest neighbors in the training set.
Kernel regression (KR). We employ ridge regression with an RBF kernel to estimate the non-linear relationship between variables.
Heavy-tailed linear regression (HLR). Hsu and Sabato [13] proposed a heavy-tailed regression model in which a median-of-means technique is utilized. They proved that for a d-dimensional estimator, a random sample of size Õ(d log(1/δ)) is sufficient to obtain a constant-factor approximation to the optimal loss with probability 1 − δ. However, this approach is not able to capture non-linear relationships, which exist in our setting.
Neural networks (NN). We use a multi-layer perceptron regressor model.
Neural networks sacrifice interpretability for predictive accuracy.
XGBoost (XGB). We use the XGBoost implementation of gradient boosting, which is optimized for various tasks such as regression and ranking [14].
LambdaMART. We compare with the well-known learning-to-rank algorithm LambdaMART [16], which combines LambdaRank [17] and MART (Multiple Additive Regression Trees) [18]. While MART uses gradient-boosted decision trees for prediction tasks, LambdaMART uses gradient-boosted decision trees with a cost function derived from LambdaRank to solve a ranking task. Here we choose to optimize LambdaMART based on ranking AUC, which is an appropriate metric for our task. LambdaMART is designed to predict the ranking of a list of items; here we predict the value of a new instance by (1) using LambdaMART to rank the new instance together with the training instances and (2) predicting the value as the midpoint of the actual values of the adjacently ranked instances (we use the LambdaMART implementation from https://github.com/jma127/pyltr).
Random (RDM). We shuffle the actual outcomes at random and assign those random outcomes as predictions.
Experimental Setup and Evaluation Metrics
We apply standard scaling to all the columns of the data matrix and to the target variable. For all competing methods, we first take the logarithm of the entries in the data matrix and of the target variable, and then apply standard scaling. For all methods, we employ 5-fold stratified cross-validation to estimate the confidence of model performance. For all baseline models, we tune the model parameters to near-optimal performance (parameters are listed in Supplementary Information Table 2). For L2P, however, we use the scikit-learn default parameters of the random forest classifier, with 100 trees and Gini impurity as the split criterion.
Under our problem definition, the model with the best performance will (1) reproduce the heavy-tailed outcome distribution and (2) predict instances accurately, especially the "big and rare" instances. Traditional regression metrics are not a good fit for this problem setting. For example, R^2 assumes normally distributed errors, which is not the case for heavy-tailed distributions; root mean square error (RMSE) is dominated by the errors at the high end, since the high-end values are extreme. Instead, we use the following metrics, which are more appropriate in our setting:
Quantile-quantile (Q-Q) plot. A Q-Q plot visually presents the deviations between the true and predicted target-variable distributions. A model that reproduces the outcome distribution produces a curve close to the y = x line. We can also investigate where the predicted quantiles deviate from this line.
Kolmogorov-Smirnov statistic (KS) and Earth mover distance (EMD). The KS statistic and EMD are two commonly used measures of the distance between two underlying probability distributions. Smaller KS and EMD values indicate higher similarity between the distributions; in our analysis, they indicate a better reproduction of the underlying distribution.
Receiver operating characteristic (ROC). We calculate true-positive and false-positive rates in order to compute the ROC curve and the area under the ROC curve (a.k.a. the AUC score). We adapt the AUC calculation to the regression setting by calculating the true-positive and false-positive rates at different thresholds (all possible actual values).
True positive rate at threshold. The calculation of the true-positive rate (TPR) at a threshold follows the recall@k measure used in the information retrieval literature.
Here, we measure the fraction of instances with true value (y) above threshold t that also have a predicted value (ŷ) above t; the FPR is computed analogously, as in Eq. 3:

TPR@t = |{i : ŷ_i ≥ t, y_i ≥ t}| / |{i : y_i ≥ t}|,    FPR@t = |{i : ŷ_i ≥ t, y_i < t}| / |{i : y_i < t}|.    (3)

For various thresholds, we compute the corresponding TPR and FPR scores to create the ROC curve. As with traditional ROC curves, a better-performing method has a curve that simultaneously improves both TPR and FPR, leading to a perfect score of AUC = 1. A random model leads to an AUC of 0.5, with the corresponding ROC curve being the 45-degree line, indicating that TPR and FPR are equal across thresholds.
It is important to note that each individual measure alone is not sufficient to judge the goodness of a model. KS, EMD, and the Q-Q plot measure how well the heavy-tailed distribution is reproduced, but they are not able to measure the prediction accuracy for each instance. The downside of comparing distributions is that a random prediction ends up with a perfect Q-Q plot and KS = 0, EMD = 0. AUC measures the accuracy of the predictions, but does not address the model's ability to reproduce the distribution. We therefore harness the benefits of the different measures to assess the quality of our models.
Experimental Results
Next we detail the performance comparisons and the robustness analysis.
Figure 3: Experimental results. We compare L2P's performance against 6 other methods (discussed in Section 3.2) across 3 datasets (see Table 1). L2P's objective is to accurately reproduce the distribution of an outcome variable and accurately predict its value. Q-Q plots (left column) show how predicted and actual quantiles align. L2P reproduces underlying distributions that are accurate at both lower and higher quantiles compared to the other methods. The Kolmogorov-Smirnov (KS) statistic and the Earth mover distance (EMD) (middle column) measure the distance between the predicted and actual distributions. The lower the values of KS and EMD, the better. L2P consistently outperforms the other methods. AUC (right column) measures prediction performance. We group methods into tiers based on the mean and standard deviation of the score. L2P achieves top-tier performance on all datasets. In summary, the only approach that achieves top performance on both reproducing the outcome variable distribution and accurate predictions across the datasets is L2P.
Performance comparison study
We compare the methods using different measures to show that L2P can accurately estimate the heavy-tailed distribution and predict the value of the heavy-tailed variable.
Q-Q plot. The left column of Fig. 3 shows the Q-Q plots of the predicted outcomes. In all datasets we see deviation at the high end (where the "big and rare" instances reside). L2P is among the top 3 methods producing the smallest deviation at the high end for all datasets. LambdaMART is also competitive in producing small deviations, but it produces larger deviations at the low end than L2P.
KS and EMD. The second column of Fig. 3 presents the KS and EMD scores. L2P's outcomes lead to the smallest KS statistic and the second-lowest EMD for almost all datasets. LambdaMART shows an advantage in minimizing EMD; however, we will show in the AUC comparison that it is not a preferable method.
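Before turning to the AUC comparison, here is a minimal Python sketch of the evaluation protocol behind it: the thresholded TPR/FPR of Eq. 3 swept over all actual values and integrated into an AUC. The helper names, the guards against empty sets, and the anchoring of the curve at (0, 0) and (1, 1) are our own assumptions about details the text does not spell out.

    import numpy as np

    def tpr_fpr_at(y_true, y_pred, t):
        # TPR@t and FPR@t as defined in Eq. 3, thresholded at the actual value t.
        pos = y_true >= t
        neg = ~pos
        tpr = np.sum((y_pred >= t) & pos) / max(np.sum(pos), 1)
        fpr = np.sum((y_pred >= t) & neg) / max(np.sum(neg), 1)
        return tpr, fpr

    def regression_auc(y_true, y_pred):
        # Sweep the threshold over all possible actual values, then integrate the ROC curve.
        rates = [tpr_fpr_at(y_true, y_pred, t) for t in np.unique(y_true)]
        tpr = np.array([r[0] for r in rates])
        fpr = np.array([r[1] for r in rates])
        order = np.argsort(fpr)
        fpr = np.concatenate([[0.0], fpr[order], [1.0]])  # anchor the curve at (0, 0) and (1, 1)
        tpr = np.concatenate([[0.0], tpr[order], [1.0]])
        return np.trapz(tpr, fpr)                         # area under the ROC curve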
AUC. The third column of Fig. 3 shows the AUC scores of the various methods on the different datasets. L2P achieves top-tier performance on the nonfiction, art, and fiction datasets, which all have high kurtosis values. XGB and NN have AUCs competitive with L2P but, as shown previously, are poor at reproducing the true distributions. We notice that LambdaMART, which performs well on EMD, has a very low AUC score, indicating its inability to predict each instance accurately. We further investigate possible reasons for LambdaMART's unsatisfying predictions. Figure 4 shows the ranking results for LambdaMART on the fiction dataset, where we trained and tested on the full dataset. We found that LambdaMART did not correctly rank items that are close to each other. This failure to preserve the ranking among the training instances is the root cause of why LambdaMART fails to achieve good performance when predicting new instances.
Figure 4: LambdaMART is trained and tested on the full fiction dataset. We observe that even when trained on the same dataset, LambdaMART's ranking output is not accurate. While it exhibits general trends around the 45-degree line, the neighborhood ranking is not restored, which is the root cause of why LambdaMART fails to achieve good performance on predictions of new instances.
Takeaway message. Considering all three evaluations together, L2P is the best method at both reproducing the underlying heavy-tailed distribution and providing accurate predictions.
Robustness analysis
The placing phase of L2P has two stages: Stage I obtains the pairwise relationships between the test instance and the training instances, and Stage II places the test instance and obtains the prediction through voting. We showed earlier that the voting itself is a maximum likelihood estimate; therefore, the performance of L2P depends heavily on the quality of the pairwise-relationship learning. Here, we investigate the robustness with respect to classification errors in the pairwise relationships. To quantify the error tolerance of the voting and of the estimation for a new instance, we conduct a set of experiments in which we introduce errors into the predicted pairwise relationships. We simulate pairwise-relationship errors with two mechanisms: (i) random error, in which a constant probability p = p_c flips the label of each pair, and (ii) distance-dependent error, in which the probability of error depends on the true ranking-percentile difference between the items; here we use the percentile of the ranking because the sizes of the datasets vary. We define the flipping probability as p_ij = e^(−α|r_i − r_j|), reflecting the assumption that it is easier to learn the pairwise relationship for items that are further apart. This is observed in our experiments as well: for example, in the nonfiction data, we notice that more than 48% of the pairwise-relationship errors occur in item pairs whose ranking-percentile difference is smaller than 10. We can control the rate of errors introduced by the two mechanisms by tuning p_c or α (a sketch of both mechanisms is given below). In Figure 5, we present the overall performance (AUC) of L2P when errors of various degrees are introduced into the pairwise relationships.
Figure 5: Errors of various degrees are introduced to L2P's classifier for pairwise relationships. We observe significantly high tolerance towards random error and gradual degradation in L2P's overall performance with distance-dependent errors.
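A minimal sketch of the two error-injection mechanisms in Python, assuming r denotes the ranking percentile (on a 0-100 scale) of each instance by its true target value; the helper names are ours, and the released code may differ in detail.

    import numpy as np

    def percentile_ranks(t):
        # Ranking percentile (0-100) of each instance by its true target value.
        order = np.argsort(np.argsort(t))
        return 100.0 * order / max(len(t) - 1, 1)

    def flip_random(y_pairs, p_c, rng):
        # Random error: flip each pairwise label with constant probability p_c.
        flips = rng.random(len(y_pairs)) < p_c
        return np.where(flips, -np.asarray(y_pairs), y_pairs)

    def flip_distance_dependent(y_pairs, r_i, r_j, alpha, rng):
        # Distance-dependent error: flip the label of pair (i, j) with probability
        # p_ij = exp(-alpha * |r_i - r_j|), so nearby pairs are flipped more often.
        p_ij = np.exp(-alpha * np.abs(np.asarray(r_i) - np.asarray(r_j)))
        flips = rng.random(len(y_pairs)) < p_ij
        return np.where(flips, -np.asarray(y_pairs), y_pairs)

    # Example: rng = np.random.default_rng(0); r = percentile_ranks(t_train);
    # for pairs (i, j) with labels y_pairs, pass r[i_idx] and r[j_idx] as r_i and r_j,
    # and sweep p_c or alpha to control the overall error rate.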
The first thing to notice is that if the pairwise relationships contain no error (see the left panel of Figure 5, where the classifier accuracy is 1), L2P produces accurate predictions, showing that the performance of the voting stage is influenced only by the quality of the pairwise relationships learned by the model. Moreover, the voting stage can actually compensate for errors in the pairwise relationships. We observe that the tolerance towards random error is significantly high: the performance of L2P is stable until more than 45% of the pairwise relationships are mistaken. For the distance-dependent error mechanism, we also observe robust performance: a 30% error rate in Stage I predictions results in just a 20% reduction in overall performance.
Model Interpretability
As mentioned earlier, one of the advantages of L2P's methodology is its interpretability. Let us present the nonfiction book Why not me? by Mindy Kaling as an example. L2P's prediction is about 218,000 copies, while the actual sales are about 230,000. The key features explaining the success of this particular book are the author's popularity and the author's previous sales: 6,228,182 pageviews and about 638,000 copies, respectively. In contrast, the neural network significantly under-predicts the sales at 16,000 copies, and it is not clear why. L2P places Why not me? between Selp-Helf by Miranda Sings and Big Magic by Elizabeth Gilbert. Selp-Helf has an author popularity of 1,390,000 and an author with no prior publishing history, while Big Magic has an author popularity of 1,596,000 and previous sales of 6,954,000. We see that Why not me? has higher author popularity than both Big Magic and Selp-Helf, but since it has a weaker publishing history than Big Magic, L2P places it between these two books.
We also want to demonstrate an example where L2P fails to achieve an accurate prediction. The nonfiction book The Best Loved Poems of Jacqueline Kennedy Onassis by Caroline Kennedy, published by Grand Central Publishing with a claimed publication year of 2015, is predicted to sell 53,000 copies, while the actual sales in the dataset are 180 copies. After an extensive analysis, it turns out that the book was initially published in 2001 and was a New York Times bestseller; L2P captures this potential and predicts high sales. This incorrect prediction is therefore rooted in a data error, and our over-prediction can be attributed to the initial edition's performance as a bestseller. The neural network predicts 7,150 copies, which is closer to the actual sales of 180.
Table 2 (header): Objectives, L2P, kNN, KR, HLR, NN, XGB, LambdaMART.
Although L2P is designed for predicting heavy-tailed outcomes, which is different from learning-to-rank, its methodological contributions show some parallels with existing ranking algorithms. Cohen et al. [10] proposed a two-phase approach that learns from preference judgments and subsequently combines multiple judgments to learn a ranked list of instances. Similarly, RankSVM [15] is a two-phase approach that translates learning the weights of ranking functions into SVM classification. Both of these approaches have complexity O(n^2), which is computationally expensive. In our experiments, we found that learning-to-rank is not satisfactory for predicting heavy-tailed distributed outcomes. Since a learning-to-rank algorithm cannot guarantee to completely recover the rankings of the training instances, it ends up producing worse predictions on new instances.
Heavy-tailed regression. Regression problems are known to suffer from under-predicting rare instances [7].
Approaches proposed to correct fitted models consider prior correction, which introduces terms capturing the fraction of rare events in the observations, and weighting the data to compensate for the differences [4,6]. Hsu and Sabato [13] proposed a methodology for linear regression with possibly heavy-tailed responses. They split the data into multiple pieces, repeat the estimation process several times, and select the estimators based on their performance. They analytically prove that their method can perform reasonably well on heavy-tailed datasets. Quantile-regression-related approaches have been proposed as well. Wang et al. [19] proposed estimating the intermediate conditional quantiles using conventional quantile regression and extrapolating these estimates to capture the behavior at the tail of the distribution. Robust Regression for Asymmetric Tails (RRAT) [20] was proposed to address the problem of asymmetric noise distributions by using conditional quantile estimators. Zhang and Zhou [21] considered linear regression with heavy-tailed distributions and showed that using the l1 loss with truncated minimization can have advantages over the l2 loss. Like all truncation-based approaches, their method requires prior knowledge of distributional properties. However, none of these regression techniques can capture non-linear decision boundaries.
Ordinal Regression. The idea behind the L2P methodology is similar to ordinal regression, where each training instance is mapped to an ordinal scale. Previous research has explored ordinal regression using binary classification [22]. The contribution of L2P is that it transforms the prediction of heavy-tailed outcomes into ordinal regression using pairwise-relationship classification followed by an MLE-based voting method. With this two-phase approach, L2P is able to reproduce the distribution of the outcome variable and provide accurate predictions for the outcome variable. Combining these two tasks consistently leads to better performance all-around.
Imbalance Learning. Data imbalance, a common issue in machine learning, has been widely studied, especially in the classification setting. In [23], the problem of imbalanced learning is defined as one in which instances have different importance values based on user preference. There are in general three categories of methods tackling this problem: data preprocessing [3,24], special-purpose learning methods [25,26], and prediction post-processing [27,28]. However, one should note that learning heavy-tailed distributed attributes is different from imbalanced learning. In most imbalanced-learning settings there is a majority group and a minority group, and within each group the items are mostly homogeneous. In a heavy-tailed distribution, however, there is no clear cut-off defining majority and minority groups, and even if a threshold is forced to form such groups, the distribution within each group is still heavy-tailed. Additionally, many of the methods designed in this space require a pre-defined relevance function.
Efficient algorithms for pairwise learning. Qian et al. proposed using a two-step hashing framework to retrieve relevant instances and nominate pairs whose ranking is uncertain [29]. Other approaches for efficiently searching for similar pairs and approximately learning pairwise distances have been proposed in the information retrieval and image search literature [30,31,32]. L2P can use any robust method that learns pairwise preferences for its pairwise-relationship learning.
Conclusions
We presented the L2P algorithm, which consistently satisfies three desired objectives: (1) modeling the heavy-tailed distribution of an outcome variable, (2) accurately making predictions for the heavy-tailed outcome variable, and (3) producing an interpretable ML algorithm, as summarized in Table 2. By learning pairwise relationships followed by an MLE-based voting method, L2P preserves the heavy-tailed nature of the outcome variables and avoids under-prediction of rare instances. We observed the following:
1. L2P accurately reproduces the heavy-tailed distribution of the outcome variable and accurately predicts that variable. Our experimental study, which included 6 competing methods and 3 datasets, demonstrates that L2P consistently outperforms the other methods across various performance measures, including accurate estimation of both lower and higher quantiles of the outcome variable distribution, a lower Kolmogorov-Smirnov statistic and Earth mover distance, and a higher AUC.
2. L2P's performance is robust when errors are introduced in the pairwise-relationship classifier. Under the random-error setting, L2P achieves almost perfect performance with up to 45% error in the pairwise-relationship predictions; and under the distance-dependent error setting, L2P has an accuracy drop of only 20% with 30% pairwise-relationship error.
3. L2P is an interpretable approach and it provides prediction context. L2P allows one to investigate each prediction by comparing it with neighboring instances and their corresponding feature values to obtain more context on the outcome. This is highly important to practitioners such as book publishers, whose executives need reasons before making a large investment.
Supplementary Information
July 8, 2021
1 Supplementary Material
Reproducibility. The code for the Python implementation of the L2P method is freely available at https://github.com/xindi-dumbledore/L2P. Code for the baseline methods used in the experiments is also included.
Feature Description of Art Dataset. Table 1 lists the features of the art dataset and their descriptions. The data includes exhibition grades, which range from A (the top grade) to D (the bottom grade). To calculate the average grade, we use the following assignments: A = 4, B = 3, C = 2, and D = 1.
Parameters Selected for Different Baseline Methods and Datasets. Table 2 lists the parameters used for each baseline method across the different datasets. We performed a grid search over the parameters of each baseline method and selected those that produced the best performance.
Recovered fragment of Supplementary Table 1 (art dataset features):
  … : median sold price of art pieces of the same medium
  medium percentile90: 90th percentile of price of art pieces of the same medium
  medium percentile75: 75th percentile of price of art pieces of the same medium
  medium percentile25: 25th percentile of price of art pieces of the same medium
  medium percentile10: 10th percentile of price of art pieces of the same medium
  medium std: standard deviation of price of art pieces of the same medium
5,394
1908.04628
2968100992
Many real-world prediction tasks have outcome (a.k.a. target or response) variables with characteristic heavy-tailed distributions. Examples include copies of books sold, auction prices of art pieces, etc. By learning heavy-tailed distributions, big and rare'' instances (e.g., the best-sellers) will have accurate predictions. Most existing approaches are not dedicated to learning heavy-tailed distributions; thus, they heavily under-predict such instances. To tackle this problem, we introduce Learning to Place (L2P), which exploits the pairwise relationships between instances to learn from a proportionally higher number of rare instances. L2P consists of two stages. In Stage 1, L2P learns a pairwise preference classifier. In Stage 2, L2P learns to place a new instance into an ordinal ranking of known instances. Based on its placement, the new instance is then assigned a value for its outcome variable. Experiments on real data show that L2P outperforms competing approaches in terms of accuracy and the capability to reproduce the heavy-tailed outcome distribution. In addition, L2P can provide an interpretable model with explainable outcomes by placing each predicted instance in context with its comparable neighbors.
Regression problems are known to suffer from under-predicting rare instances @cite_17 . Approaches proposed to correct fitting models consider prior correction that introduces terms capturing a fraction of rare events in the observations and weighting the data to compensate for differences @cite_0 @cite_4 . Hsu and Sabato @cite_27 proposed a methodology for linear regression with possibly heavy-tailed responses. They split data into multiple pieces, repeat the estimation process several times, and select the estimators based on their performance. They analytically prove that their method can perform reasonably well on heavy-tailed datasets. Quantile regression related approaches are proposed as well. Wang @cite_24 proposed estimating the intermediate conditional quantiles using conventional quantile regression and extrapolating these estimates to capture the behavior at the tail of the distribution. Robust Regression for Asymmetric Tails (RRAT) @cite_33 was proposed to address the problem of asymmetric noise distribution by using conditional quantile estimators. Zhang and Zhou @cite_20 considered linear regression with heavy-tail distributions and showed that using @math loss with truncated minimization can have advantages over @math loss. Like all truncated based approaches, their method requires prior knowledge of distributional properties. None of these regression techniques can capture non-linear decision boundaries.
{ "abstract": [ "Disease and trait-associated variants represent a tiny minority of all known genetic variation, and therefore there is necessarily an imbalance between the small set of available disease-associated and the much larger set of non-deleterious genomic variation, especially in non-coding regulatory regions of human genome. Machine Learning (ML) methods for predicting disease-associated non-coding variants are faced with a chicken and egg problem - such variants cannot be easily found without ML, but ML cannot begin to be effective until a sufficient number of instances have been found. Most of state-of-the-art ML-based methods do not adopt specific imbalance-aware learning techniques to deal with imbalanced data that naturally arise in several genome-wide variant scoring problems, thus resulting in a significant reduction of sensitivity and precision. We present a novel method that adopts imbalance-aware learning strategies based on resampling techniques and a hyper-ensemble approach that outperforms state-of-the-art methods in two different contexts: the prediction of non-coding variants associated with Mendelian and with complex diseases. We show that imbalance-aware ML is a key issue for the design of robust and accurate prediction algorithms and we provide a method and an easy-to-use software tool that can be effectively applied to this challenging prediction task.", "In the presence of a heavy-tail noise distribution, regression becomes much more difficult. Traditional robust regression methods assume that the noise distribution is symmetric, and they downweight the influence of so-called outliers. When the noise distribution is asymmetric, these methods yield biased regression estimators. Motivated by data-mining problems for the insurance industry, we propose a new approach to robust regression tailored to deal with asymmetric noise distribution. The main idea is to learn most of the parameters of the model using conditional quantile estimators (which are biased but robust estimators of the regression) and to learn a few remaining parameters to combine and correct these estimators, to minimize the average squared error in an unbiased way. Theoretical analysis and experiments show the clear advantages of the approach. Results are on artificial data as well as insurance data, using both linear and neural network predictors.", "The purpose of this study is to use the truncated Newton method in prior correction logistic regression (LR). A regularization term is added to prior correction LR to improve its performance, which results in the truncated-regularized prior correction algorithm. The performance of this algorithm is compared with that of weighted LR and the regular LR methods for large imbalanced binary class data sets. The results, based on the KDD99 intrusion detection data set, and 6 other data sets at both the prior correction and the weighted LRs have the same computational efficiency when the truncated Newton method is used in both of them. A higher discriminative performance, however, resulted from weighting, which exceeded both the prior correction and the regular LR on nearly all the data sets. From this study, we conclude that weighting outperforms both the regular and prior correction LR models in most data sets and it is the method of choice when LR is used to evaluate imbalanced and rare event data.", "Estimation of conditional quantiles at very high or low tails is of interest in numerous applications. 
Quantile regression provides a convenient and natural way of quantifying the impact of covariates at different quantiles of a response distribution. However, high tails are often associated with data sparsity, so quantile regression estimation can suffer from high variability at tails especially for heavy-tailed distributions. In this article, we develop new estimation methods for high conditional quantiles by first estimating the intermediate conditional quantiles in a conventional quantile regression framework and then extrapolating these estimates to the high tails based on reasonable assumptions on tail behaviors. We establish the asymptotic properties of the proposed estimators and demonstrate through simulation studies that the proposed methods enjoy higher accuracy than the conventional quantile regression estimates. In a real application involving statistical downscaling of daily precipitation in...", "", "In this paper, we consider the problem of linear regression with heavy-tailed distributions. Different from previous studies that use the squared loss to measure the performance, we choose the absolute loss, which is more robust in the presence of large prediction errors. To address the challenge that both the input and output could be heavy-tailed, we propose a truncated minimization problem, and demonstrate that it enjoys an O( √ d n ) excess risk, where d is the dimensionality and n is the number of samples. Compared with traditional work on l1-regression, the main advantage of our result is that we achieve a high-probability risk bound without exponential moment conditions on the input and output. Furthermore, if the input is bounded, we show that the classical empirical risk minimization is competent for l1-regression even when the output is heavy-tailed.", "We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros (“nonevents”). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99 of their (nonfixed) data collection costs or to collect much more meaningful explanatory variables. We provide methods that link these two results, enabling both types of corrections to work simultaneously, and software that implements the methods developed." ], "cite_N": [ "@cite_4", "@cite_33", "@cite_0", "@cite_24", "@cite_27", "@cite_20", "@cite_17" ], "mid": [ "2620563544", "2112744712", "2743594598", "2137922946", "", "2963290535", "2163848058" ] }
L2P: LEARNING TO PLACE FOR ESTIMATING HEAVY-TAILED DISTRIBUTED OUTCOMES
Heavy-tailed distributions are prevalent in real-world data. By heavy-tailed, we mean a variable whose distribution has a heavier tail than the exponential distribution. Many real-world applications involve predicting heavy-tailed distributed outcomes. For example, publishers want to predict a book's sales numbers before its publication in order to decide the advance for the author, the advertising effort, the number of copies printed, etc. [1]. Galleries are interested in an artist's selling potential to decide whether or not to represent the artist. For inventory planning, warehouses and shops would like to know how many of each item to keep in storage [2]. All of these real-world applications involve heavy-tailed distributed outcomes: book sales, art auction prices, and demands for items. The challenge in predicting heavy-tailed outcomes lies at the tail of the distributions, i.e., the "big and rare" instances such as best-sellers, high-selling art pieces, and items with huge demand. Those instances are usually the ones that attract the most interest and create the most market value. Traditional approaches tend to under-predict the rare instances at the tail of the distribution. The limiting factor for prediction performance is the insufficient amount of training data on the rare instances. Approaches tackling the class imbalance problem, such as over-sampling training instances [3], adjusting weights, and adding extra constraints [4,5,6,7], do not properly address the aforementioned problem, since those approaches assume homogeneously distributed groups with proportionally different sizes. In addition, defining distinct groups on a dataset with heavy-tailed outcomes is not trivial, since the distribution is continuous. Therefore, predicting the values of heavy-tailed variables is not merely the class imbalance problem; instead, it is the problem of learning a heavy-tailed distribution. We present an approach called Learning to Place (L2P) to estimate heavy-tailed outcomes, and we define performance measures for heavy-tailed target-variable prediction. L2P learns to estimate a heavy-tailed distribution by first learning pairwise preferences between the instances and then placing the new instance within the known instances and assigning an outcome value. Our contributions are as follows:
1. We introduce Learning to Place (L2P) to estimate heavy-tailed outcomes by (i) learning from pairwise relationships between instances and (ii) placing new instances among the training data and predicting outcome values based on those placements.
2. We present appropriate statistical metrics for measuring the performance of learning heavy-tailed distributions.
3. In an exhaustive empirical study on real-world data, we demonstrate that L2P is robust and consistently outperforms various competing approaches across diverse real-world datasets.
4. Through case studies, we demonstrate that L2P not only provides accurate predictions but also is interpretable.
The outline of the paper is as follows. Section 2 presents our proposed method. Section 3 describes our experiments. Section 4 explains how our method produces models that are interpretable. Section 5 contains related work and a discussion of our findings. The paper concludes in Section 6.
2 Proposed Method: Learning to Place (L2P)
L2P takes as input a data matrix where the rows are data instances (e.g., books) and the columns are features that describe each instance (e.g., author, publisher, etc.).
Conclusions We presented the L2P algorithm, which satisfies three desired objectives consistently: (1) modeling heavy-tail distribution of an outcome variable, (2) accurately making predictions for the heavy-tailed outcome variable, and (3) producing an interpretable ML algorithm as summarized in Table 2. Through learning pairwise relationships following by an MLE-based voting method, L2P preserves the heavy-tailed nature of the outcome variables and avoids under-prediction of rare instances. We observed the following: 1. L2P accurately reproduces the heavy-tailed distribution of the outcome variable and accurately predicts of that variable. Our experimental study, which included 6 competing methods and 3 datasets, demonstrates that L2P consistently outperforms other methods across various performance measures, including accurate estimation of both lower and higher quantiles of the outcome variable distribution, lower Kolmogorov-Smirnov statistic and Earth Mover Distance, and higher AUC. 2. L2P's performance is robust when errors are introduced in the pairwise-relationship classifier. Under random error setting, L2P can achieve almost perfect performance up to 45% error in pairwise relationship predictions; and under distance-dependent error setting, L2P has an accuracy drop of only 20% with 30% pairwiserelationship error. 3. L2P is an interpretable approach and it provides prediction context. L2P allows one to investigate each prediction by comparing with neighboring instances and their corresponding feature values to obtain more context on the outcome. This is highly important to practitioners such as book publishers, where executives need reasons before making a huge investment. Supplementary Information July 8, 2021 1 Supplementary Material Reproducibility The code for Python implementation of L2P method is freely available from https://github.com/xindi-dumbledore/L2P. Codes for baseline methods for the experiments are also included. Table 1 lists the features for the art dataset and its descriptions. The data includes exhibition grade, which ranges from A (the top grade) to D (the bottom grade). To calculate average grade, we use the following assignments: A = 4, B = 3, C = 2 and D = 1. Table 2 lists the parameters used for each baseline methods across different dataset. We performed a grid search on the parameters for each baseline method and select the one that produced the best performance. Median sold price of art pieces of the same medium medium percentile90 90th percentile of price of art pieces of the same medium medium percentile75 75th percentile of price of art pieces of the same medium medium percentile25 25th percentile of price of art pieces of the same medium medium percentile10 Feature Description of Art Dataset Parameters Selected for Different Baseline Methods and Datasets 10th percentile of price of art pieces of the same medium medium std standard deviation of price of art pieces of the same medium
5,394
1908.04628
2968100992
Many real-world prediction tasks have outcome (a.k.a. target or response) variables with characteristic heavy-tailed distributions. Examples include copies of books sold, auction prices of art pieces, etc. By learning heavy-tailed distributions, ``big and rare'' instances (e.g., the best-sellers) will have accurate predictions. Most existing approaches are not dedicated to learning heavy-tailed distributions; thus, they heavily under-predict such instances. To tackle this problem, we introduce Learning to Place (L2P), which exploits the pairwise relationships between instances to learn from a proportionally higher number of rare instances. L2P consists of two stages. In Stage 1, L2P learns a pairwise preference classifier that predicts, for a pair of instances, which one has the larger value of the outcome variable. In Stage 2, L2P learns to place a new instance into an ordinal ranking of known instances. Based on its placement, the new instance is then assigned a value for its outcome variable. Experiments on real data show that L2P outperforms competing approaches in terms of accuracy and the capability to reproduce heavy-tailed outcome distributions. In addition, L2P can provide an interpretable model with explainable outcomes by placing each predicted instance in context with its comparable neighbors.
In the literature, methodologies have been proposed to learn pairwise relations more efficiently than comparing all @math pairs exhaustively. Qian et al. proposed using a two-step hashing framework to retrieve relevant instances and nominate pairs whose ranking is uncertain @cite_21 . Similar approaches that efficiently search for similar pairs and approximately learn pairwise distances have been proposed in the literature for information retrieval and image search @cite_25 @cite_37 @cite_3 .
{ "abstract": [ "We introduce a method that enables scalable similarity search for learned metrics. Given pairwise similarity and dissimilarity constraints between some examples, we learn a Mahalanobis distance function that captures the examples' underlying relationships well. To allow sublinear time similarity search under the learned metric, we show how to encode the learned metric parameterization into randomized locality-sensitive hash functions. We further formulate an indirect solution that enables metric learning and hashing for vector spaces whose high dimensionality makes it infeasible to learn an explicit transformation over the feature dimensions. We demonstrate the approach applied to a variety of image data sets, as well as a systems data set. The learned metrics improve accuracy relative to commonly used metric baselines, while our hashing construction enables efficient indexing with learned distances and very large databases.", "Pair wise learning to rank algorithms (such as Rank SVM) teach a machine how to rank objects given a collection of ordered object pairs. However, their accuracy is highly dependent on the abundance of training data. To address this limitation and reduce annotation efforts, the framework of active pair wise learning to rank was introduced recently. However, in such a framework the number of possible query pairs increases quadratic ally with the number of instances. In this work, we present the first scalable pair wise query selection method using a layered (two-step) hashing framework. The first step relevance hashing aims to retrieve the strongly relevant or highly ranked points, and the second step uncertainty hashing is used to nominate pairs whose ranking is uncertain. The proposed framework aims to efficiently reduce the search space of pair wise queries and can be used with any pair wise learning to rank algorithm with a linear ranking function. We evaluate our approach on large-scale real problems and show it has comparable performance to exhaustive search. The experimental results demonstrate the effectiveness of our approach, and validate the efficiency of hashing in accelerating the search of massive pair wise queries.", "Learning a measure of similarity between pairs of objects is a fundamental problem in machine learning. It stands in the core of classification methods like kernel machines, and is particularly useful for applications like searching for images that are similar to a given image or finding videos that are relevant to a given video. In these tasks, users look for objects that are not only visually similar but also semantically related to a given object. Unfortunately, current approaches for learning similarity do not scale to large datasets, especially when imposing metric constraints on the learned similarity. We describe OASIS, a method for learning pairwise similarity that is fast and scales linearly with the number of objects and the number of non-zero features. Scalability is achieved through online learning of a bilinear model over sparse representations using a large margin criterion and an efficient hinge loss cost. OASIS is accurate at a wide range of scales: on a standard benchmark with thousands of images, it is more precise than state-of-the-art methods, and faster by orders of magnitude. On 2.7 million images collected from the web, OASIS can be trained within 3 days on a single CPU. 
The non-metric similarities learned by OASIS can be transformed into metric similarities, achieving higher precisions than similarities that are learned as metrics in the first place. This suggests an approach for learning a metric from data that is larger by orders of magnitude than was handled before.", "In this paper, we present a new hashing method to learn compact binary codes for highly efficient image retrieval on large-scale datasets. While the complex image appearance variations still pose a great challenge to reliable retrieval, in light of the recent progress of Convolutional Neural Networks (CNNs) in learning robust image representation on various vision tasks, this paper proposes a novel Deep Supervised Hashing (DSH) method to learn compact similarity-preserving binary code for the huge body of image data. Specifically, we devise a CNN architecture that takes pairs of images (similar dissimilar) as training inputs and encourages the output of each image to approximate discrete values (e.g. +1 -1). To this end, a loss function is elaborately designed to maximize the discriminability of the output space by encoding the supervised information from the input image pairs, and simultaneously imposing regularization on the real-valued outputs to approximate the desired discrete values. For image retrieval, new-coming query images can be easily encoded by propagating through the network and then quantizing the network outputs to binary codes representation. Extensive experiments on two large scale datasets CIFAR-10 and NUS-WIDE show the promising performance of our method compared with the state-of-the-arts." ], "cite_N": [ "@cite_37", "@cite_21", "@cite_25", "@cite_3" ], "mid": [ "2144892774", "2061545541", "2131627887", "2464915613" ] }
L2P: LEARNING TO PLACE FOR ESTIMATING HEAVY-TAILED DISTRIBUTED OUTCOMES
Heavy-tailed distributions are prevalent in real-world data. By heavy-tailed, we mean a variable whose distribution has a heavier tail than the exponential distribution. Many real-world applications involve predicting heavy-tailed outcomes. For example, publishers want to predict a book's sales before its publication in order to decide the advance for the author, the advertising effort, the number of copies printed, etc. [1]. Galleries are interested in an artist's selling potential to decide whether or not to represent the artist. For inventory planning, warehouses and shops would like to know how many of each item to keep in storage [2]. All of these real-world applications involve heavy-tailed outcomes: book sales, art auction prices, and demand for items.

The challenge for predicting heavy-tailed outcomes lies at the tail of the distributions, i.e., the "big and rare" instances such as best-sellers, high-selling art pieces and items with huge demand. Those instances are usually the ones that attract the most interest and create the most market value. Traditional approaches tend to under-predict the rare instances at the tail of the distribution. The limiting factor for prediction performance is the insufficient amount of training data on the rare instances. Approaches tackling the class imbalance problem, such as over-sampling training instances [3], adjusting weights, and adding extra constraints [4,5,6,7], do not properly address the aforementioned problem, since those approaches assume homogeneously distributed groups with proportionally different sizes. In addition, defining distinct groups on a dataset with heavy-tailed outcomes is not trivial since the distribution is continuous. Therefore, predicting the values of heavy-tailed variables is not merely a class imbalance problem; instead, it is the problem of learning a heavy-tailed distribution.

We present an approach called Learning to Place (L2P) to estimate heavy-tailed outcomes, and we define performance measures for heavy-tailed target variable prediction. L2P learns to estimate a heavy-tailed distribution by first learning pairwise preferences between the instances and then placing a new instance among the known instances and assigning an outcome value. Our contributions are as follows:
1. We introduce Learning to Place (L2P) to estimate heavy-tailed outcomes by (i) learning from pairwise relationships between instances and (ii) placing new instances among the training data and predicting outcome values based on those placements.
2. We present appropriate statistical metrics for measuring performance when learning heavy-tailed distributions.
3. In an exhaustive empirical study on real-world data, we demonstrate that L2P is robust and consistently outperforms various competing approaches across diverse real-world datasets.
4. Through case studies, we demonstrate that L2P not only provides accurate predictions but is also interpretable.
The outline of the paper is as follows. Section 2 presents our proposed method. Section 3 describes our experiments. Section 4 explains how our method produces models that are interpretable. Section 5 contains related work and a discussion of our findings. The paper concludes in Section 6.

2 Proposed Method: Learning to Place (L2P)

L2P takes as input a data matrix where the rows are data instances (e.g., books) and the columns are features that describe each instance (e.g., author, publisher, etc.).
Each data instance also has a value for the predefined target variable (e.g., copies of a book sold). L2P learns to map each instance's feature vector to the value of its target variable, which is the standard supervised learning setup. However, the challenges that L2P addresses are as follows. First, it learns the heavy-tailed distribution of the target variable, and thus it does not under-predict the "big and rare" instances. Second, it generates an interpretable model for the human end-user (e.g., a publisher).

Figure 1 describes the training and placing phases of L2P. In the training phase, L2P learns a pairwise-relationship classifier, which predicts whether the target variable of an instance A (I_A) is greater (or less) than that of another instance B (I_B). To predict outcomes in the placing phase, the unplaced instance is compared with each training instance using the model learned in the training phase, generating pairwise relationships between the unplaced instance and the training instances. These pairwise relationships are then used as "votes" to predict the target outcome of the new instance.

Figure 1: Overview of the L2P algorithm. In the training phase, L2P learns a classifier C on the pairwise relationship between each pair of training instances. In the placing phase, L2P applies the classifier C to obtain the pairwise relationships between unplaced instance q and all of the training instances. Then, each instance in the training set contributes to the placement of unplaced instance q by voting on bins to its left or to its right, depending on the predicted relation between the instances. The mid-value of the bin with the highest vote is assigned as the prediction.

Training Phase. For each pair of instances i and j with feature vectors f_i and f_j, L2P concatenates the two feature vectors, X_ij = [f_i, f_j]. If i's target variable is greater than j's, then y_ij = 1; otherwise, y_ij = -1 (ties are ignored in the training phase). Formally, denoting with t_i the target variable for instance i and with S the set of instances in the training set, L2P generates the following training data:

$$X_{ij} = [f_i, f_j], \quad \text{for each } (i, j) \in S \times S,\; i \neq j,\; t_i \neq t_j, \qquad (1)$$
$$y_{ij} = \begin{cases} 1, & t_i > t_j \\ -1, & t_i < t_j. \end{cases} \qquad (2)$$

Then a classifier C is trained on the training data X_ij and labels y_ij. It is important to note that the trained classifier may produce conflicting results: for example, I_A < I_B and I_B < I_C but I_C < I_A. In the Experiments section, we demonstrate the robustness of L2P to such conflicts, which are rooted in pairwise classification error.

Placing Phase. The placing phase consists of two stages. In Stage I, for each unplaced instance q, L2P obtains X_iq = [f_i, f_q] for each i ∈ S (recall S is the training set). Then, L2P applies the classifier C to X_iq to get the predicted pairwise relationship between the test instance q and all training instances (ŷ_iq = C(X_iq)). In Stage II, L2P treats each training instance as a "voter". Training instances (voters) are sorted by their target variables in descending order, dividing the target-variable axis into bins. If ŷ_iq = 1, bins to the right of t_i obtain an upvote (+1) and bins to the left of t_i obtain a downvote (-1), and vice versa for ŷ_iq = -1. After the voting process, L2P takes the bin with the most "votes" as the predicted bin for test instance q, and obtains the prediction t̂_q as the midpoint of this bin.
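To make the two phases concrete, here is a minimal Python sketch of the pair construction, classifier training and voting-based placement described above. It assumes scikit-learn's RandomForestClassifier as the pairwise classifier (the Experiments section reports using a random forest with scikit-learn defaults: 100 trees and Gini impurity); the helper names build_pairs, train_pairwise_classifier and place are our own illustrative choices, not the authors' released code.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_pairs(F, t):
    # Build concatenated feature pairs X_ij = [f_i, f_j] with labels y_ij, as in Eqs. (1)-(2).
    X, y = [], []
    n = len(t)
    for i in range(n):
        for j in range(n):
            if i == j or t[i] == t[j]:  # skip self-pairs and ties
                continue
            X.append(np.concatenate([F[i], F[j]]))
            y.append(1 if t[i] > t[j] else -1)
    return np.array(X), np.array(y)

def train_pairwise_classifier(F, t):
    X, y = build_pairs(F, t)
    clf = RandomForestClassifier(n_estimators=100)  # Gini impurity is the scikit-learn default
    clf.fit(X, y)
    return clf

def place(clf, F_train, t_train, f_q):
    # Stage I: predict the relation between test instance q and every training instance.
    order = np.argsort(t_train)                     # training targets sorted ascending
    t_sorted = np.asarray(t_train)[order]
    X_q = np.array([np.concatenate([F_train[i], f_q]) for i in order])
    y_hat = clf.predict(X_q)                        # +1 means t_i > t_q is predicted
    # Stage II: voting over the n+1 bins delimited by the sorted training targets.
    votes = np.zeros(len(t_sorted) + 1)
    for k, rel in enumerate(y_hat):
        if rel == 1:                                # q predicted below t_sorted[k]
            votes[: k + 1] += 1
            votes[k + 1 :] -= 1
        else:                                       # q predicted above t_sorted[k]
            votes[: k + 1] -= 1
            votes[k + 1 :] += 1
    b = int(np.argmax(votes))
    # Prediction is the midpoint of the winning bin (clamped at the extremes).
    lo = t_sorted[max(b - 1, 0)]
    hi = t_sorted[min(b, len(t_sorted) - 1)]
    return (lo + hi) / 2.0

The full pair construction above is quadratic in the number of training instances; the Complexity Analysis below describes the sampling scheme that reduces this cost.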
Theoretical Analysis of Voting Process

L2P's voting process can be viewed as the maximum likelihood estimation (MLE) of the optimal placement of an instance based on the pairwise relationships. Given the test instance q, our goal is to find its optimal bin m. For any bin b, we have $P(b|q) \propto P(q|b) \times P(b)$. Since each training instance i contributes to P(q|b), we have

$$P(q|b) = \frac{1}{Z} \sum_{i \in S_{\text{train}}} P_i(q|b),$$

where P_i(q|b) is the conditional probability of test instance q being placed in bin b based on its pairwise relationship with training instance i, and Z is the normalization factor, $Z = \sum_b \sum_i P_i(q|b)$. L2P assigns two probabilities to each pair of training instance i and test instance q: $p^l_i(q)$ and $p^r_i(q)$, denoting the probability that the test instance q is smaller than (i.e., to the left of) or larger than (i.e., to the right of) training instance i, respectively, where $p^l_i(q) + p^r_i(q) = 1$. Let $R^i_b \in \{l, r\}$ be the region defined by training instance i for bin b, and $|R^i_b|$ the number of bins in this region. We have $P_i(q|b) = p^{R^i_b}_i(q)/|R^i_b|$, assuming the test instance is equally probable to fall in each bin of region $R^i_b$. Therefore, the optimal bin is

$$m = \arg\max_b \frac{1}{Z} \sum_{i \in S} \frac{p^{R^i_b}_i(q)}{|R^i_b|}.$$

We observe that $p^{R^i_b}_i(q)/|R^i_b|$ is the "vote" that training instance i gives to bin b for test instance q; therefore, the optimal bin m is the one with the most "votes".

L2P can incorporate any method that takes pairwise preferences and learns to place a test instance among the training instances. Specifically, we examined SpringRank [8], FAS-PIVOT [9] and tournament-graph-related heuristics [10]. We found that the performances of these approaches are quite similar to voting. However, voting, with its linear runtime complexity, is the most computationally efficient method among them.

Complexity Analysis

Suppose n is the number of instances in the training set. The vanilla training phase of L2P learns pairwise relationships among all pairs in the training set, leading to an O(n^2) runtime complexity, which is computationally prohibitive for large datasets even though the training phase is an offline process and can be easily parallelized. Thus, we implemented an efficient approach based on the intuition that it is easier for a classifier to learn the pairwise relationship between instances that are far apart than between instances that are near each other. Specifically, we define two parameters: n_s, the number of samples to compare with for each training instance, and k, the number of instances that are considered near to each training instance. For comparisons with each training instance i, L2P's efficient training-phase algorithm (1) takes all k near instances of i and (2) uniformly at random samples n_s - k non-near instances of i. The nearness of two instances is measured by the difference in their target values. Our experiments with the efficient implementation of L2P led to similar AUC scores for the overall prediction task, but reduced the runtime complexity to O(n_s × n), where n_s << n (for instance, n_s equal to 20% of n). L2P's placing phase has a complexity of O(n) for each new (i.e., test) instance.
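The efficient training scheme just described (all k near instances plus n_s - k randomly sampled non-near instances per training instance) can be sketched as follows. The sampling routine and its parameter names are our own illustration of the idea, not the authors' code; the returned index pairs would feed a pair-construction step such as the hypothetical build_pairs above.

import numpy as np

def sample_comparison_pairs(t, n_s, k, rng=None):
    # For each training instance i, keep its k nearest instances by target value plus
    # (n_s - k) uniformly sampled non-near instances (assumes n_s > k), giving O(n_s * n) pairs.
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.asarray(t, dtype=float)
    n = len(t)
    pairs = []
    for i in range(n):
        dist = np.abs(t - t[i])      # nearness is measured by the target-value difference
        dist[i] = np.inf             # exclude the instance itself
        near = np.argsort(dist)[:k]
        far_pool = np.setdiff1d(np.arange(n), np.append(near, i))
        far = rng.choice(far_pool, size=min(n_s - k, len(far_pool)), replace=False)
        pairs.extend((i, int(j)) for j in np.concatenate([near, far]))
    return pairs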
Algorithm 1: L2P's Training Algorithm
Input: Training data S = (F, t)
Output: Pairwise-relationship classifier C
// Feature matrix
X = [ ];
// Label vector
y = [ ];
for i ← 1 to |S| do
    for j ← i + 1 to |S| do
        X.append([f_i, f_j]);
        if t_i > t_j then y.append(1)
        else if t_i < t_j then y.append(-1)
    end
end
C.train(X, y);
return C

Algorithm 2: L2P's Placing Algorithm
Input: Classifier C, Training data S = (F, t), Unplaced instance q represented by its features f_q
Output: t̂_q = predicted value for test instance q
B = [ ];
// Unique target values, highest to lowest
bins = sort(unique(t));
for i ← 1 to |bins| do
    B[i] = 0
end
for i ← 1 to |F| do
    ŷ_iq = C.predict([f_i, f_q])
    for j ← 1 to BinEdgeIndex(t_i) - 1 do
        B[j] -= ŷ_iq;
    end
    for j ← BinEdgeIndex(t_i) to |B| do
        B[j] += ŷ_iq;
    end
end
b = GetHighestBin(B);
t̂_q = Mean(bins[b-1], bins[b]);
return t̂_q

Experiments

In this section, we describe the data used in our experiments, the baseline and competing approaches, the experimental methodology, the evaluation metrics, and the results.

Datasets

We present results on the following real-world applications:
Book sales: This dataset consists of information about all print nonfiction and fiction books published in the United States in 2015, including features about the authors' publication history, the book summary, and author popularity (see [1] for feature details). The goal is to predict one-year book sales using features available prior to the book's publication. We separate nonfiction books and fiction books in the experiments.
Art auctions: This dataset combines information on artists' exhibits, auction sales, and primary-market quotes. It was previously used to quantify the success of artists based on their exhibition trajectories [11]. We select 7,764 paintings using vertical logarithmic binning [12]. The features include previous exhibition records (number of exhibitions, number of exhibitions at each grade), sales records (number of art pieces sold, various statistics of the prices of previously sold pieces), career length and medium information (for the full feature list, see Supplementary Information Table 1). The prediction task is to predict the auction price of an art piece based on the artist's previous sales and exhibition history.
Table 1 provides the summary statistics of these datasets. Specifically, we calculate the kurtosis of each target variable. Kurtosis measures the "tailedness" of a probability distribution. The kurtosis of any univariate normal distribution is 3, and the higher the kurtosis, the heavier the tails. Complementary cumulative distribution functions (CCDFs) of the real outcomes are shown in Fig. 2.

Baseline and Competing Approaches

To compare the predictive capabilities of L2P, we experiment with the following baseline approaches from the literature.
K-nearest neighbors (kNN). The prediction is obtained through local interpolation of the nearest neighbors in the training set.
Kernel regression (KR). We employ Ridge regression with RBF kernels to estimate the non-linear relationship between variables.
Heavy-tailed linear regression (HLR). Hsu and Sabato [13] proposed a heavy-tailed regression model in which a median-of-means technique is utilized. They proved that for a d-dimensional estimator, a random sample of size Õ(d log(1/δ)) is sufficient to obtain a constant-factor approximation to the optimal loss with probability 1 - δ. However, this approach is not able to capture non-linear relationships, which exist in our setting.
Neural networks (NN). We use a multi-layer perceptron regressor model.
Neural networks sacrifice interpretability for predictive accuracy.
XGBoost (XGB). We use the implementation of gradient boosting optimized for various tasks such as regression and ranking [14].
LambdaMART. We compare with the well-known learning-to-rank algorithm LambdaMART [16]. LambdaMART combines LambdaRank [17] and MART (Multiple Additive Regression Trees) [18]. While MART uses gradient-boosted decision trees for prediction tasks, LambdaMART uses gradient-boosted decision trees with a cost function derived from LambdaRank to solve a ranking task. Here we choose to optimize LambdaMART based on ranking AUC, which is an appropriate metric for our task. LambdaMART is designed to predict the ranking of a list of items; here we predict the value for a new instance by (1) using LambdaMART to rank the new instance together with the training instances and (2) taking the midpoint of the actual values of the adjacently ranked training instances (LambdaMART code is obtained from https://github.com/jma127/pyltr).
Random (RDM). We shuffle the actual outcomes at random and assign those random outcomes as predictions.

Experimental Setup and Evaluation Metrics

We impose standard scaling on all the columns of the data matrix and on the target variable. For all competing methods, we first take the logarithm of the entries in the data matrix and the target variable, and then apply standard scaling. For all methods, we employ 5-fold stratified cross-validation to estimate the confidence of the model performance. For all baseline models, we tune the model parameters to near-optimal performance (parameters are listed in Supplementary Information Table 2). For L2P, however, we use the scikit-learn default parameters of the random forest classifier, with 100 trees and Gini impurity as the split criterion.
Under our problem definition, the model with the best performance will (1) reproduce the heavy-tailed outcome distribution and (2) predict instances accurately, especially the "big and rare" instances. Traditional regression metrics are not a good fit in this problem setting. For example, R^2 assumes normally distributed errors, which is not the case for heavy-tailed distributions, and root mean square error (RMSE) will be dominated by the errors on the high end since the values at the high end are extreme. Instead, we use the following metrics, which are more appropriate in our setting:
Quantile-quantile (Q-Q) plot. A Q-Q plot visually presents the deviations between the true and predicted target-variable distributions. A model that can reproduce the outcome distribution should produce a curve close to the y = x line. We can also investigate where the predicted quantiles deviate from this line.
Kolmogorov-Smirnov statistic (KS) and Earth mover distance (EMD). The KS statistic and EMD are two commonly used measures of the distance between two underlying probability distributions. Smaller KS and EMD indicate higher similarity between distributions and, in our analysis, a better prediction of the underlying distribution.
Receiver operating characteristic (ROC). We calculate true-positive and false-positive rates in order to compute the ROC curve and the area under the ROC curve (a.k.a. the AUC score). We adapt the AUC calculation to the regression setting by calculating the true-positive and false-positive rates at different thresholds (all possible actual values).
True positive rate at threshold. The calculation of the true-positive rate (TPR) at a threshold follows the recall@k measure used in the information retrieval literature. We measure the fraction of instances with true value (y) higher than threshold t that also have predicted value (ŷ) higher than t to calculate the TPR, and we follow a similar approach to compute the FPR, as in Eq. (3):

$$TPR@t = \frac{|\{i : \hat{y}_i \geq t,\ y_i \geq t\}|}{|\{i : y_i \geq t\}|}, \qquad FPR@t = \frac{|\{i : \hat{y}_i \geq t,\ y_i < t\}|}{|\{i : y_i < t\}|}. \qquad (3)$$

For various thresholds, we compute the corresponding TPR and FPR scores to create an ROC curve. As with traditional ROC curves, a better-performing method has a curve that simultaneously improves both TPR and FPR, leading to a perfect score of AUC = 1. A random model leads to an AUC of 0.5, with the corresponding ROC curve being the 45-degree line, indicating that TPR and FPR are equal across thresholds.
It is important to note that no individual measure alone is sufficient to judge the goodness of a model. KS, EMD and the Q-Q plot measure how well the heavy-tailed distribution is reproduced, but cannot measure the prediction accuracy for individual instances; a downside of comparing only distributions is that a random prediction ends up with a perfect Q-Q plot and KS = 0, EMD = 0. AUC measures the accuracy of the predictions, but does not address the model's ability to reproduce the distribution. We therefore harness the benefits of the different measures to assess the quality of our models.
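As a concrete illustration of these evaluation measures, here is a minimal Python sketch that computes KS, EMD and the threshold-based TPR/FPR of Eq. (3) with SciPy and scikit-learn; the function names and the simple trapezoidal integration over swept thresholds are our own choices, not the authors' evaluation code.

import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance
from sklearn.metrics import auc

def distribution_metrics(y_true, y_pred):
    # KS statistic and Earth mover distance between actual and predicted outcome distributions.
    ks = ks_2samp(y_true, y_pred).statistic
    emd = wasserstein_distance(y_true, y_pred)
    return ks, emd

def tpr_fpr_at(y_true, y_pred, t):
    # TPR@t and FPR@t as in Eq. (3), with t separating "big" from "small" instances.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos, neg = y_true >= t, y_true < t
    tpr = np.mean(y_pred[pos] >= t) if pos.any() else 0.0
    fpr = np.mean(y_pred[neg] >= t) if neg.any() else 0.0
    return tpr, fpr

def regression_auc(y_true, y_pred):
    # Sweep all observed actual values as thresholds and integrate the resulting ROC curve.
    pts = [tpr_fpr_at(y_true, y_pred, t) for t in np.unique(y_true)]
    tprs, fprs = zip(*pts)
    fprs = np.append(fprs, [0.0, 1.0])   # anchor the curve at (0, 0) and (1, 1)
    tprs = np.append(tprs, [0.0, 1.0])
    order = np.argsort(fprs)
    return auc(fprs[order], tprs[order])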
Experimental Results

Next, we detail the performance comparisons and a robustness analysis.

Figure 3: Experimental results. We compare L2P's performance against 6 other methods (discussed in Section 3.2) across 3 datasets (see Table 1). L2P's objective is to accurately reproduce the distribution of an outcome variable and accurately predict its value. Q-Q plots (left column) show how predicted and actual quantiles align. Compared to the other methods, L2P reproduces underlying distributions that are accurate at both lower and higher quantiles. The Kolmogorov-Smirnov (KS) statistic and Earth mover distance (EMD) (middle column) measure the distance between the predicted and actual distributions; the lower the values for KS and EMD, the better. L2P consistently outperforms the other methods. AUC (right column) measures prediction performance. We group methods into tiers based on the mean and standard deviation of the score. L2P achieves top-tier performance on all datasets. In summary, the only approach that achieves top performance on both reproducing the outcome-variable distribution and making accurate predictions across the datasets is L2P.

Performance comparison study

We compare the methods using different measurements to show that L2P can accurately estimate the heavy-tailed distribution and predict the value of the heavy-tailed variable.
Q-Q plot. The left column of Fig. 3 shows the Q-Q plots of the predicted outcomes. In all datasets, we see deviations at the high end (where the "big and rare" instances reside). L2P is among the top 3 methods producing the smallest deviation at the high end for all datasets. LambdaMART is also competitive in producing small deviations, but it produces larger deviations at the low end than L2P.
KS and EMD. The second column of Fig. 3 presents the KS and EMD scores. The outcomes of L2P lead to the smallest KS statistic and the second-lowest EMD for almost all datasets. LambdaMART shows an advantage in minimizing EMD; however, we will show in the AUC comparison that it is not a preferable method.
AUC. The third column of Fig. 3 shows the AUC scores of the various methods on the different datasets.
L2P achieves top-tier performance on the nonfiction, art and fiction datasets, which all have high kurtosis values. XGB and NN have AUCs competitive with L2P but, as shown previously, are poor at reproducing the true distributions. We notice that LambdaMART, which performs well on EMD, has a very low AUC score, indicating its inability to produce accurate predictions for individual instances.
We further investigate the possible reasons for LambdaMART's unsatisfactory predictions. Figure 4 shows the ranking results for LambdaMART on the fiction dataset, where we trained and tested on the full dataset. We found that LambdaMART did not correctly rank items that are close to each other. This failure to preserve the rank among training instances is the root cause of LambdaMART's poor performance when predicting new instances.

Figure 4: LambdaMART is trained and tested on the full fiction dataset. We observe that even when trained on the same dataset, LambdaMART's ranking output is not accurate: while it exhibits a general trend around the 45-degree line, the neighborhood ranking is not recovered, which is the root cause of why LambdaMART fails to achieve good performance on predictions of new instances.

Takeaway message. Taking all three evaluations into consideration, we can see that L2P is the best method at both reproducing the underlying heavy-tailed distribution and providing accurate predictions.

Robustness analysis

The placing phase of L2P has two stages: Stage I obtains the pairwise relationships between the test instance and the training instances, and Stage II places the test instance and obtains the prediction through voting. Previously, we showed that voting itself is a maximum likelihood estimate; therefore, the performance of L2P depends highly on the quality of the pairwise-relationship learning. Here, we investigate the robustness with respect to classification errors in the pairwise relationships. To quantify the error tolerance of "voting" and of the estimation for a new instance, we conduct a set of experiments in which we introduce errors in the predicted pairwise relationships. We simulate pairwise-relationship errors with two mechanisms: (i) random error: a constant probability p = p_c flips the label of each pair; (ii) distance-dependent error: the probability of error depends on the true ranking-percentile difference between items; here we use the percentile of the ranking because the sizes of the datasets vary. We define the flipping probability as $p_{ij} = e^{-\alpha |r_i - r_j|}$, assuming it is easier to learn the pairwise relationship for items that are further apart. This is observed in our experiments as well: for example, in the nonfiction data, we notice that more than 48% of the pairwise-relationship errors occur in item pairs with a ranking-percentile difference smaller than 10. We can control the rate of errors introduced by the two mechanisms by tuning p_c or α. In Figure 5, we present the overall performance (AUC) of L2P when various degrees of error are introduced in the pairwise relationships.

Figure 5: Overall performance (AUC) of L2P when various degrees of error are introduced to L2P's classifier for pairwise relationships. We observe significantly high tolerance towards random error and a gradual degradation of L2P's overall performance with distance-dependent errors.
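A minimal sketch of the two error-injection mechanisms used in this robustness study, assuming pairwise labels in {-1, +1} and true ranking percentiles r_i in [0, 100]; the function names and conventions are our own illustrative choices.

import numpy as np

def flip_random(y_pairs, p_c, rng=None):
    # Random error: flip each pairwise label independently with constant probability p_c.
    rng = np.random.default_rng(0) if rng is None else rng
    y_pairs = np.asarray(y_pairs)
    flips = rng.random(len(y_pairs)) < p_c
    return np.where(flips, -y_pairs, y_pairs)

def flip_distance_dependent(y_pairs, r_i, r_j, alpha, rng=None):
    # Distance-dependent error: flip pair (i, j) with probability p_ij = exp(-alpha * |r_i - r_j|),
    # so pairs that are close in the true ranking are flipped more often than distant pairs.
    rng = np.random.default_rng(0) if rng is None else rng
    y_pairs = np.asarray(y_pairs)
    p_ij = np.exp(-alpha * np.abs(np.asarray(r_i) - np.asarray(r_j)))
    flips = rng.random(len(y_pairs)) < p_ij
    return np.where(flips, -y_pairs, y_pairs)

Sweeping p_c or alpha then controls the overall error rate fed into the voting stage, as in the experiments summarized in Figure 5.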
The first thing to notice is that when the pairwise relationships contain no error (see the left panel of Figure 5, where the classifier accuracy is 1), L2P produces accurate predictions, showing that the performance of the voting stage is influenced only by the quality of the pairwise relationships learned by the model. Moreover, the voting stage can actually compensate for errors in the pairwise relationships. We observe that the tolerance to random error is significantly high: the performance of L2P is stable until more than 45% of the pairwise relationships are mistaken. For the distance-dependent error mechanism, we observe robust performance as well: 30% error in Stage I predictions results in just a 20% reduction of the overall performance.

Model Interpretability

As mentioned earlier, one of the advantages of L2P's methodology is its interpretability. Let us present the nonfiction book Why not me? by Mindy Kaling as an example. L2P's prediction is about 218,000 copies, while the actual sales are about 230,000. The key features explaining the success of this particular book are the author's popularity and the author's previous sales: 6,228,182 pageviews and about 638,000 copies, respectively. In contrast, the neural network significantly under-predicts, at 16,000 copies, and it is not clear why. L2P places Why not me? between Selp-Helf by Miranda Sings and Big Magic by Elizabeth Gilbert. Selp-Helf has an author popularity of 1,390,000 and an author with no prior publishing history, while Big Magic has an author popularity of 1,596,000 and previous sales of 6,954,000. We see that Why not me? has a higher author popularity than Big Magic and Selp-Helf, but a smaller publishing history than Big Magic, so L2P places it between these two books.
We also want to demonstrate an example where L2P fails to achieve an accurate prediction. The nonfiction book The Best Loved Poems of Jacqueline Kennedy Onassis by Caroline Kennedy, published by Grand Central Publishing with a claimed publication year of 2015, is predicted to sell 53,000 copies, while the actual sales in the dataset are 180 copies. After an extensive analysis, it turns out that the book was initially published in 2001 and was a New York Times bestseller; L2P captures this potential and predicts high sales. Therefore, this incorrect prediction is rooted in a data error, and our over-prediction can be attributed to the initial edition's performance as a best-seller. The neural network predicts 7,150 copies, which is closer to the actual sales of 180.

Table 2: Objectives satisfied by L2P, kNN, KR, HLR, NN, XGB and LambdaMART.

Related Work

Although L2P is designed for predicting heavy-tailed outcomes, which is different from learning-to-rank, its methodological contributions show some parallels with existing ranking algorithms. Cohen et al. [10] proposed a two-phase approach that learns from preference judgments and subsequently combines multiple judgments to learn a ranked list of instances. Similarly, RankSVM [15] is a two-phase approach that translates learning weights for ranking functions into SVM classification. Both of these approaches have complexity O(n^2), which is computationally expensive. In our experiments, we found that learning-to-rank is not satisfactory for predicting heavy-tailed outcomes: since a learning-to-rank algorithm cannot guarantee to fully recover the rankings of the training instances, it ends up producing worse predictions for new instances.
Heavy-tailed regression. Regression problems are known to suffer from under-predicting rare instances [7].
Approaches have been proposed to correct fitted models, including prior correction, which introduces terms capturing the fraction of rare events in the observations, and weighting the data to compensate for differences [4,6]. Hsu and Sabato [13] proposed a methodology for linear regression with possibly heavy-tailed responses. They split the data into multiple pieces, repeat the estimation process several times, and select the estimators based on their performance. They analytically prove that their method can perform reasonably well on heavy-tailed datasets. Quantile-regression-based approaches have been proposed as well. Wang et al. [19] proposed estimating the intermediate conditional quantiles using conventional quantile regression and extrapolating these estimates to capture the behavior at the tail of the distribution. Robust Regression for Asymmetric Tails (RRAT) [20] was proposed to address the problem of asymmetric noise distributions by using conditional quantile estimators. Zhang and Zhou [21] considered linear regression with heavy-tailed distributions and showed that using an l1 loss with truncated minimization can have advantages over an l2 loss. Like all truncation-based approaches, their method requires prior knowledge of distributional properties. However, none of these regression techniques can capture non-linear decision boundaries.
Ordinal Regression. The idea behind the L2P methodology is similar to ordinal regression, where each training instance is mapped to an ordinal scale. Previous research has explored ordinal regression using binary classification [22]. The contribution of L2P is that it transforms the prediction of heavy-tailed outcomes into ordinal regression using pairwise-relationship classification followed by an MLE-based voting method. Through this two-phase approach, L2P is able to reproduce the distribution of the outcome variable and provide accurate predictions for it. Combining these two tasks leads consistently to better performance all-around.
Imbalance Learning. Data imbalance, a common issue in machine learning, has been widely studied, especially in the classification setting. In [23], the problem of imbalance learning is defined as instances having different importance values based on user preference. There are, in general, three categories of methods tackling this problem: data preprocessing [3,24], special-purpose learning methods [25,26] and prediction post-processing [27,28]. However, one should note that learning heavy-tailed distributed attributes is different from imbalance learning. In most imbalance learning, there is a majority group and a minority group, but within each group the items are mostly homogeneous. In a heavy-tailed distribution, there is no clear cut-off to define majority/minority groups, and even if a threshold is forced to form such groups, the distribution within each group is still heavy-tailed. Additionally, many of the methods designed in this space require choosing a pre-defined relevance function.
Efficient algorithm for pairwise learning. Qian et al. proposed using a two-step hashing framework to retrieve relevant instances and nominate pairs whose ranking is uncertain [29]. Other approaches for efficiently searching for similar pairs and approximately learning pairwise distances have been proposed in the literature for information retrieval and image search [30,31,32]. L2P can use any robust method that learns pairwise preferences for its pairwise-relationship learning.
Conclusions

We presented the L2P algorithm, which consistently satisfies three desired objectives: (1) modeling the heavy-tailed distribution of an outcome variable, (2) accurately making predictions for the heavy-tailed outcome variable, and (3) producing an interpretable ML algorithm, as summarized in Table 2. By learning pairwise relationships followed by an MLE-based voting method, L2P preserves the heavy-tailed nature of the outcome variables and avoids under-prediction of rare instances. We observed the following:
1. L2P accurately reproduces the heavy-tailed distribution of the outcome variable and accurately predicts that variable. Our experimental study, which included 6 competing methods and 3 datasets, demonstrates that L2P consistently outperforms the other methods across various performance measures, including accurate estimation of both lower and higher quantiles of the outcome-variable distribution, lower Kolmogorov-Smirnov statistic and Earth mover distance, and higher AUC.
2. L2P's performance is robust when errors are introduced in the pairwise-relationship classifier. Under the random-error setting, L2P achieves almost perfect performance with up to 45% error in the pairwise-relationship predictions, and under the distance-dependent error setting, L2P's accuracy drops by only 20% with 30% pairwise-relationship error.
3. L2P is an interpretable approach and provides prediction context. L2P allows one to investigate each prediction by comparing it with neighboring instances and their corresponding feature values to obtain more context on the outcome. This is highly important to practitioners such as book publishers, whose executives need reasons before making a large investment.

Supplementary Information

Reproducibility. The Python implementation of the L2P method is freely available at https://github.com/xindi-dumbledore/L2P. Code for the baseline methods used in the experiments is also included.
Table 1 lists the features of the art dataset and their descriptions. The data include the exhibition grade, which ranges from A (the top grade) to D (the bottom grade). To calculate the average grade, we use the following assignments: A = 4, B = 3, C = 2 and D = 1. Table 2 lists the parameters used for each baseline method across the different datasets. We performed a grid search on the parameters for each baseline method and selected the ones that produced the best performance.

Table 1: Feature Description of Art Dataset (excerpt)
- Median sold price of art pieces of the same medium
- medium percentile90: 90th percentile of price of art pieces of the same medium
- medium percentile75: 75th percentile of price of art pieces of the same medium
- medium percentile25: 25th percentile of price of art pieces of the same medium
- medium percentile10: 10th percentile of price of art pieces of the same medium
- medium std: standard deviation of price of art pieces of the same medium

Table 2: Parameters Selected for Different Baseline Methods and Datasets
5,394
1908.03477
2968848930
We address the problem of cross-modal fine-grained action retrieval between text and video. Cross-modal retrieval is commonly achieved through learning a shared embedding space, that can indifferently embed modalities. In this paper, we propose to enrich the embedding by disentangling parts-of-speech (PoS) in the accompanying captions. We build a separate multi-modal embedding space for each PoS tag. The outputs of multiple PoS embeddings are then used as input to an integrated multi-modal space, where we perform action retrieval. All embeddings are trained jointly through a combination of PoS-aware and PoS-agnostic losses. Our proposal enables learning specialised embedding spaces that offer multiple views of the same embedded entities. We report the first retrieval results on fine-grained actions for the large-scale EPIC dataset, in a generalised zero-shot setting. Results show the advantage of our approach for both video-to-text and text-to-video action retrieval. We also demonstrate the benefit of disentangling the PoS for the generic task of cross-modal video retrieval on the MSR-VTT dataset.
Recently, neural networks trained with a ranking loss defined over image pairs @cite_13 , triplets @cite_26 , quadruplets @cite_31 or beyond @cite_2 have been considered for metric learning @cite_4 @cite_26 and for a broad range of search tasks such as face and person identification @cite_20 @cite_31 @cite_28 @cite_1 or instance retrieval @cite_27 @cite_13 . These learning-to-rank approaches have been generalised to two or more modalities. Standard examples include building a joint embedding for images and text @cite_3 @cite_25 , for videos and audio @cite_10 and, more related to our work, for videos and action labels @cite_35 , videos and text @cite_40 @cite_8 @cite_21 or some of those combined @cite_16 @cite_0 @cite_33 .
{ "abstract": [ "We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero shot action recognition and obtain state of the art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.", "Querying with an example image is a simple and intuitive interface to retrieve information from a visual database. Most of the research in image retrieval has focused on the task of instance-level image retrieval, where the goal is to retrieve images that contain the same object instance as the query image. In this work we move beyond instance-level retrieval and consider the task of semantic image retrieval in complex scenes, where the goal is to retrieve images that share the same semantics as the query image. We show that, despite its subjective nature, the task of semantically ranking visual scenes is consistently implemented across a pool of human annotators. We also show that a similarity based on human-annotated region-level captions is highly correlated with the human ranking and constitutes a good computable surrogate. Following this observation, we learn a visual embedding of the images where the similarity in the visual space is correlated with their semantic similarity surrogate. We further extend our model to learn a joint embedding of visual and textual cues that allows one to query the database using a text modifier in addition to the query image, adapting the results to the modifier. Finally, our model can ground the ranking decisions by showing regions that contributed the most to the similarity between pairs of images, providing a visual explanation of the similarity.", "", "The increasing amount of online videos brings several opportunities for training self-supervised neural networks. The creation of large scale datasets of videos such as the YouTube-8M allows us to deal with this large amount of data in manageable way. In this work, we find new ways of exploiting this dataset by taking advantage of the multi-modal information it provides. By means of a neural network, we are able to create links between audio and visual documents, by projecting them into a common region of the feature space, obtaining joint audio-visual embeddings. These links are used to retrieve audio samples that fit well to a given silent video, and also to retrieve images that match a given a query audio. The results in terms of Recall@K obtained over a subset of YouTube-8M videos show the potential of this unsupervised approach for cross-modal feature learning. 
We train embeddings for both scales and assess their quality in a retrieval problem, formulated as using the feature extracted from one modality to retrieve the most similar videos based on the features computed in the other modality.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", "Despite a recent push towards large-scale object recognition, activity recognition remains limited to narrow domains and small vocabularies of actions. In this paper, we tackle the challenge of recognizing and describing activities in-the-wild''. We present a solution that takes a short video clip and outputs a brief sentence that sums up the main activity in the video, such as the actor, the action and its object. Unlike previous work, our approach works on out-of-domain actions: it does not require training videos of the exact activity. If it cannot find an accurate prediction for a pre-trained model, it finds a less specific answer that is also plausible from a pragmatic standpoint. We use semantic hierarchies learned from the data to help to choose an appropriate level of generalization, and priors learned from Web-scale natural language corpora to penalize unlikely combinations of actors actions objects, we also use a Web-scale language model to fill in'' novel verbs, i.e. when the verb does not appear in the training set. We evaluate our method on a large YouTube corpus and demonstrate it is able to generate short sentence descriptions of video clips better than baseline approaches.", "Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. 
In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks.", "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.", "We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com Deep-Image-Retrieval.", "", "Our objective is video retrieval based on natural language queries. In addition, we consider the analogous problem of retrieving sentences or generating descriptions given an input video. Recent work has addressed the problem by embedding visual and textual inputs into a common space where semantic similarities correlate to distances. 
We also adopt the embedding approach, and make the following contributions: First, we utilize web image search in sentence embedding process to disambiguate fine-grained visual concepts. Second, we propose embedding models for sentence, image, and video inputs whose parameters are learned simultaneously. Finally, we show how the proposed model can be applied to description generation. Overall, we observe a clear improvement over the state-of-the-art methods in the video and sentence retrieval tasks. In description generation, the performance level is comparable to the current state-of-the-art, although our embeddings were trained for the retrieval tasks.", "This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a largemargin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and textto-image retrieval. Our method achieves new state-of-theart results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.", "Joint understanding of video and language is an active research area with many applications. Prior work in this domain typically relies on learning text-video embeddings. One difficulty with this approach, however, is the lack of large-scale annotated video-caption datasets for training. To address this issue, we aim at learning text-video embeddings from heterogeneous data sources. To this end, we propose a Mixture-of-Embedding-Experts (MEE) model with ability to handle missing input modalities during training. As a result, our framework can learn improved text-video embeddings simultaneously from image and video datasets. We also show the generalization of MEE to other input modalities such as face descriptors. We evaluate our method on the task of video retrieval and report results for the MPII Movie Description and MSR-VTT datasets. The proposed MEE model demonstrates significant improvements and outperforms previously reported methods on both text-to-video and video-to-text retrieval tasks. Code is available at: this https URL", "", "Constructing a joint representation invariant across different modalities (e.g., video, language) is of significant importance in many multimedia applications. While there are a number of recent successes in developing effective image-text retrieval methods by learning joint representations, the video-text retrieval task, however, has not been explored to its fullest extent. In this paper, we study how to effectively utilize available multimodal cues from videos for the cross-modal video-text retrieval task. Based on our analysis, we propose a novel framework that simultaneously utilizes multi-modal features (different visual characteristics, audio inputs, and text) by a fusion strategy for efficient retrieval. Furthermore, we explore several loss functions in training the embedding and propose a modified pairwise ranking loss for the task. 
Experiments on MSVD and MSR-VTT datasets demonstrate that our method achieves significant performance gain compared to the state-of-the-art approaches.", "Person re-identification (ReID) is an important task in wide area video surveillance which focuses on identifying people across different cameras. Recently, deep learning networks with a triplet loss become a common framework for person ReID. However, the triplet loss pays main attentions on obtaining correct orders on the training set. It still suffers from a weaker generalization capability from the training set to the testing set, thus resulting in inferior performance. In this paper, we design a quadruplet loss, which can lead to the model output with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. As a result, our model has a better generalization ability and can achieve a higher performance on the testing set. In particular, a quadruplet deep network using a margin-based online hard negative mining is proposed based on the quadruplet loss for the person ReID. In extensive experiments, the proposed network outperforms most of the state-of-the-art algorithms on representative datasets which clearly demonstrates the effectiveness of our proposed method.", "Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes." ], "cite_N": [ "@cite_35", "@cite_3", "@cite_2", "@cite_10", "@cite_20", "@cite_4", "@cite_8", "@cite_21", "@cite_26", "@cite_28", "@cite_27", "@cite_40", "@cite_16", "@cite_25", "@cite_33", "@cite_1", "@cite_0", "@cite_31", "@cite_13" ], "mid": [ "2908138876", "2744926832", "", "2783457476", "2096733369", "2963775347", "2142900973", "877909479", "1975517671", "2598634450", "2340690086", "2890443664", "2490414731", "2963389687", "2796207103", "", "2808399042", "2606377603", "2963125676" ] }
Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
With the onset of the digital age, millions of hours of video are being recorded, and searching this data is becoming a monumental task. It is even more tedious when searching shifts from video-level labels, such as 'dancing' or 'skiing', to short action segments like 'cracking eggs' or 'tightening a screw'. In this paper, we focus on the latter and refer to them as fine-grained actions. We thus explore the task of fine-grained action retrieval, where both queries and retrieved results can be either a video sequence or a textual caption describing the fine-grained action. Such free-form action descriptions allow for a more subtle characterisation of actions, but require going beyond training a classifier on a predefined set of action labels [20,30]. As is common in cross-modal search tasks [26,36], we learn a shared embedding space onto which we project both videos and captions.

Figure 1. We target fine-grained action retrieval. Action captions are broken using part-of-speech (PoS) parsing. We create separate embedding spaces for the relevant PoS (e.g. Noun or Verb) and then combine these embeddings into a shared embedding space for action retrieval (best viewed in colour).

By nature, fine-grained actions can be described by an actor, an act and the list of objects involved in the interaction. We thus propose to learn a separate embedding for each part-of-speech (PoS), such as, for instance, verbs, nouns or adjectives. This is illustrated in Fig. 1 for two PoS (verbs and nouns). When embedding verbs solely, relevant entities are those that share the same verb/act regardless of the nouns/objects used. Conversely, for a PoS embedding focusing on nouns, different actions performed on the same object are considered relevant entities. This enables a PoS-aware embedding, specialised for retrieving a variety of relevant entities, given that PoS. The outputs from the multiple PoS embedding spaces are then combined within an encoding module that produces the final action embedding. We train our approach end-to-end, jointly optimising the multiple PoS embeddings and the final fine-grained action embedding.

This approach has a number of advantages over training a single embedding space, as is standardly done [7,8,15,22,24]. Firstly, this process builds different embeddings that can be seen as different views of the data, which contribute to the final goal in a collaborative manner. Secondly, it allows us to inject additional information in a principled way, without requiring additional annotation, as parsing a caption for PoS is done automatically. Finally, when considering a single PoS at a time, for instance verbs, the corresponding PoS embedding learns to generalise across the variety of actions involving each verb (e.g. the many ways 'open' can be used). This generalisation is key to tackling more actions, including new ones not seen during training. We present the first retrieval results for the recent large-scale EPIC dataset [6] (Sec. 4.1), utilising the released free-form narrations, previously unexplored for this dataset, as our supervision. Additionally, we show that our second contribution, learning PoS-aware embeddings, is also valuable for general video retrieval, by reporting results on the MSR-VTT dataset [39] (Sec. 4.2).

Method. Our aim is to learn representations suitable for cross-modal search, where the query modality is different from the target modality.
Specifically, we use video sequences with textual captions/descriptions and perform video-to-text (vt) or text-to-video (tv) retrieval tasks. Additionally, we would like to make sure that classical search (where the query and the retrieved results share the same modality) can still be performed in that representation space. The latter tasks are referred to as video-to-video (vv) and text-to-text (tt) search. As discussed in the previous section, several possibilities exist, the most common being to embed both modalities in a shared space such that, regardless of the modality, the representations of two relevant entities are close to each other in that space, while the representations of two non-relevant entities are far apart. We first describe how to build such a joint embedding between two modalities, enforcing both cross-modal and within-modal constraints (Sec. 3.1). Then, based on the knowledge that different parts of the caption encode different aspects of an action, we describe how to leverage this information and build several disentangled Part-of-Speech embeddings (Sec. 3.2). Finally, we propose a unified representation well-suited for fine-grained action retrieval (Sec. 3.3).

Multi-Modal Embedding Network (MMEN). This section describes a Multi-Modal Embedding Network (MMEN) that encodes the video sequence and the text caption into a common descriptor space. Let {(v_i, t_i) | v_i ∈ V, t_i ∈ T} be a set of videos, with v_i the visual representation of the i-th video sequence and t_i the corresponding textual caption. Our aim is to learn two embedding functions f : V → Ω and g : T → Ω such that f(v_i) and g(t_i) are close in the embedded space Ω. Note that f and g can be linear projection matrices or more complex functions, e.g. deep neural networks. We denote the parameters of the embedding functions f and g by θ_f and θ_g respectively, and we learn them jointly with a weighted combination of two cross-modal (L_{v,t}, L_{t,v}) and two within-modal (L_{v,v}, L_{t,t}) triplet losses. Note that other point-wise, pairwise or list-wise losses could also be considered as alternatives to the triplet loss.

The cross-modal losses are crucial to the task and ensure that the representations of a query and of a relevant item from the other modality are closer than the representations of this query and of a non-relevant item. We use cross-modal triplet losses [19,36]:

L_{v,t}(θ) = Σ_{(i,j,k)∈T_{v,t}} max( γ + d(f_{v_i}, g_{t_j}) − d(f_{v_i}, g_{t_k}), 0 ),
T_{v,t} = { (i,j,k) | v_i ∈ V, t_j ∈ T_{i+}, t_k ∈ T_{i−} }    (1)

L_{t,v}(θ) = Σ_{(i,j,k)∈T_{t,v}} max( γ + d(g_{t_i}, f_{v_j}) − d(g_{t_i}, f_{v_k}), 0 ),
T_{t,v} = { (i,j,k) | t_i ∈ T, v_j ∈ V_{i+}, v_k ∈ V_{i−} }    (2)

where γ is a constant margin, θ = [θ_f, θ_g], and d(·) is the distance function in the embedded space Ω. T_{i+} and T_{i−} respectively denote the sets of relevant and non-relevant captions, and V_{i+} and V_{i−} the sets of relevant and non-relevant video sequences, for the multi-modal object (v_i, t_i). To simplify the notation, f_{v_i} denotes f(v_i) ∈ Ω and g_{t_j} denotes g(t_j) ∈ Ω. Additionally, within-modal losses, also called structure-preserving losses [19,36], ensure that the neighbourhood structure within each modality is preserved in the newly built joint embedding space.
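As a concrete illustration of the cross-modal triplet losses in Equations (1)-(2) (the within-modal losses formalised just below have exactly the same form), here is a minimal NumPy sketch. It assumes the embeddings f(v) and g(t) have already been computed and uses the Euclidean distance for d; all names and the toy data are illustrative, not taken from the authors' code.

```python
import numpy as np

def triplet_hinge_loss(anchor, positive, negative, margin=1.0):
    """Hinge term max(margin + d(a, p) - d(a, n), 0) for one triplet,
    with d the Euclidean distance in the shared space Omega."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(margin + d_pos - d_neg, 0.0)

def cross_modal_loss(f_v, g_t, triplets, margin=1.0):
    """Sum of Eq. (1)-style terms: f_v and g_t map ids to embedded vectors;
    triplets is a list of (i, j, k) with caption t_j relevant and t_k
    non-relevant for video v_i."""
    return sum(
        triplet_hinge_loss(f_v[i], g_t[j], g_t[k], margin)
        for (i, j, k) in triplets
    )

# Toy usage: two 4-d embeddings per modality.
rng = np.random.default_rng(0)
f_v = {0: rng.normal(size=4), 1: rng.normal(size=4)}
g_t = {0: rng.normal(size=4), 1: rng.normal(size=4)}
print(cross_modal_loss(f_v, g_t, [(0, 0, 1), (1, 1, 0)]))
```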
Formally, the within-modal losses are defined as:

L_{v,v}(θ) = Σ_{(i,j,k)∈T_{v,v}} max( γ + d(f_{v_i}, f_{v_j}) − d(f_{v_i}, f_{v_k}), 0 ),
T_{v,v} = { (i,j,k) | v_i ∈ V, v_j ∈ V_{i+}, v_k ∈ V_{i−} }    (3)

L_{t,t}(θ) = Σ_{(i,j,k)∈T_{t,t}} max( γ + d(g_{t_i}, g_{t_j}) − d(g_{t_i}, g_{t_k}), 0 ),
T_{t,t} = { (i,j,k) | t_i ∈ T, t_j ∈ T_{i+}, t_k ∈ T_{i−} }    (4)

using the same notation as before. The final loss used for the MMEN network is a weighted combination of these four losses, summed over all triplets in T, defined as follows:

L(θ) = λ_{v,v} L_{v,v} + λ_{v,t} L_{v,t} + λ_{t,v} L_{t,v} + λ_{t,t} L_{t,t}    (5)

where each λ is a weighting for the corresponding loss term.

Disentangled Part-of-Speech Embeddings. The previous section described the generic Multi-Modal Embedding Network (MMEN). In this section, we propose to disentangle different caption components so that each component is encoded independently in its own embedding space. To do this, we first break down the text caption into different PoS tags. For example, the caption "I divided the onion into pieces using wooden spoon" can be divided into verbs [divide, using], pronouns [I], nouns [onion, pieces, spoon] and adjectives [wooden]. In our experiments, we focus on the tags most relevant for fine-grained action recognition, verbs and nouns, but we explore other types for general video retrieval. We extract all words from a caption for a given PoS tag and train one MMEN to embed only these words and the video representation in the same space. We refer to it as a PoS-MMEN.

To train a PoS-MMEN, we propose to adapt the notion of relevance specifically to the PoS. This has a direct impact on the sets V_{i+}, V_{i−}, T_{i+}, T_{i−} defined in Equations (1)-(4). For example, the caption 'cut tomato' is disentangled into the verb 'cut' and the noun 'tomato'. Consider a PoS-MMEN focusing on verb tags solely: the caption 'cut carrots' is a relevant caption, as the pair share the same verb 'cut'. In another PoS-MMEN focusing on noun tags solely, the two remain irrelevant. As the relevant/irrelevant sets differ within each PoS-MMEN, these embeddings specialise to that PoS. It is important to note that, although the same visual features are used as input for all PoS-MMENs, the fact that we build one embedding space per PoS trains multiple visual embedding functions f^k that can be seen as multiple views of the video sequence.

PoS-Aware Unified Action Embedding. The previous section describes how to extract different PoS from captions and how to build PoS-specific MMENs. These PoS-MMENs can already be used alone for PoS-specific retrieval tasks, for instance a verb-retrieval task (e.g. retrieve all videos where "cut" is relevant) or a noun-retrieval task. More importantly, the outputs of different PoS-MMENs can be combined to perform more complex tasks, including the one we are interested in, namely fine-grained action retrieval. Let us denote the k-th PoS-MMEN visual and textual embedding functions by f^k : V → Ω^k and g^k : T → Ω^k. We define:

v̂_i = e_v(f^1_{v_i}, f^2_{v_i}, ..., f^K_{v_i}),    t̂_i = e_t(g^1_{t^1_i}, g^2_{t^2_i}, ..., g^K_{t^K_i})    (6)

where e_v and e_t are encoding functions that combine the outputs of the PoS-MMENs. We explore multiple pooling functions for e_v and e_t: concatenation, max and average (the latter two assume all Ω^k share the same dimensionality). When v̂_i and t̂_i have the same dimension, we can perform action retrieval by directly computing the distance between these representations. We instead propose to train a final PoS-agnostic MMEN that unifies the representation, leading to our final JPoSE model.
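The disentanglement step described above only needs an off-the-shelf PoS tagger. The sketch below uses spaCy, which the paper also uses for tagging (Sec. 4); the specific model name is an assumption (and requires a one-off `python -m spacy download en_core_web_lg`), and the grouping by coarse PoS tag mirrors the verb/noun/pronoun/adjective example given above.

```python
import spacy
from collections import defaultdict

# Assumed model name for "the large English spaCy parser" used in the paper.
nlp = spacy.load("en_core_web_lg")

def disentangle_caption(caption):
    """Group the lemmatised words of a caption by coarse PoS tag, e.g.
    {'VERB': ['divide', 'use'], 'NOUN': ['onion', 'piece', 'spoon'], ...}."""
    groups = defaultdict(list)
    for token in nlp(caption):
        if token.is_alpha:
            groups[token.pos_].append(token.lemma_)
    return dict(groups)

print(disentangle_caption("I divided the onion into pieces using wooden spoon"))
```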
Joint Part of Speech Embedding (JPoSE). Considering the PoS-aware representations v̂_i and t̂_i as input and, still following our learning-to-rank approach, we learn the parameters θ_{f̂} and θ_{ĝ} of the two embedding functions f̂ : V̂ → Γ and ĝ : T̂ → Γ, which project into our final embedding space Γ. We again consider this as the task of building a single MMEN with the inputs v̂_i and t̂_i, and follow the process described in Sec. 3.1. In other words, we train using the loss defined in Equation (5), which we denote L̂ here, and which combines two cross-modal and two within-modal losses using the triplets T_{v,t}, T_{t,v}, T_{v,v}, T_{t,t} formed from the relevance between videos and captions in the action retrieval task. As relevance here is not PoS-aware, we refer to this loss as PoS-agnostic. This is illustrated in Fig. 2. We learn the multiple PoS-MMENs and the final MMEN jointly with the following combined loss:

L(θ̂, θ^1, ..., θ^K) = L̂(θ̂) + Σ_{k=1}^{K} α_k L^k(θ^k)    (7)

where the α_k are weighting factors, L̂ is the PoS-agnostic loss described above and the L^k are the PoS-aware losses corresponding to the K PoS-MMENs.

Experiments. We first tackle fine-grained action retrieval on the EPIC dataset [6] (Sec. 4.1) and then the general video retrieval task on the MSR-VTT dataset [39] (Sec. 4.2). This allows us to explore two different tasks using the proposed multi-modal embeddings. The large English spaCy parser [1] was used to find the Part-of-Speech (PoS) tags and disentangle them in the captions of both datasets. Statistics on the most frequent PoS tags are shown in Table 1. As these statistics show, EPIC contains mainly nouns and verbs, while MSR-VTT has longer captions and more nouns. This has an impact on the PoS chosen for each dataset when building the JPoSE model.

Fine-Grained Action Retrieval on the EPIC Dataset. The EPIC dataset [6] is an egocentric dataset with 32 participants cooking in their own kitchens who then narrated the actions in their native language. The narrations were translated to English but maintain the open vocabulary selected by the participants. We employ the released free-form narrations to use this dataset for fine-grained action retrieval. We follow the provided train/test splits. Note that by construction there are two test sets, Seen and Unseen, referring to whether the kitchen has been seen in the training set. We follow the terminology from [6], and note that this terminology should not be confused with the zero-shot literature, which distinguishes seen/unseen classes. The actual sequences are strictly disjoint between all sets. Additionally, we train only on the many-shot examples from EPIC, excluding all examples of the few-shot classes from the training set. This ensures each action has more than 100 relevant videos during training and increases the number of zero-shot examples in both test sets.

Building relevance sets for retrieval. The EPIC dataset offers an opportunity for fine-grained action retrieval, as the open vocabulary has been grouped into semantically relevant verb and noun classes for the action recognition challenge. For example, 'put', 'place' and 'put-down' are grouped into one class. As far as we are aware, this paper presents the first attempt to use the open-vocabulary narrations released to the community. We determine retrieval relevance scores from these semantically grouped verb and noun classes, defined in [6]. These indicate which videos and captions should be considered related to each other.
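To make this relevance definition concrete, the sketch below builds an action-level relevance test from such verb and noun class groupings; the class mappings shown are toy stand-ins for the semantic classes defined in [6], not the actual groupings.

```python
def make_relevance_fn(verb_class, noun_class):
    """Action-level relevance: two (verb, noun) annotations are relevant iff
    their verbs share a verb class and their nouns share a noun class.
    `verb_class` / `noun_class` map open-vocabulary words to class ids."""
    def relevant(a, b):
        (va, na), (vb, nb) = a, b
        return (verb_class.get(va) == verb_class.get(vb)
                and noun_class.get(na) == noun_class.get(nb))
    return relevant

# Toy class mappings, for illustration only.
verb_class = {"put": 0, "place": 0, "take": 1}
noun_class = {"mug": 0, "cup": 0, "onion": 1}
relevant = make_relevance_fn(verb_class, noun_class)
print(relevant(("put", "mug"), ("place", "cup")))   # True
print(relevant(("put", "mug"), ("take", "mug")))    # False
```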
Following these semantic groups, a query 'put mug' and a video with 'place cup' in its caption are considered relevant, as 'place' and 'put' share the same verb class and 'mug' and 'cup' share the same noun class. Subsequently, we define the triplets T_{v,t}, T_{t,v}, T_{v,v}, T_{t,t} used to train the MMEN models and to compute the loss L̂ in JPoSE. When training a PoS-MMEN, two videos are considered relevant only within that PoS. Accordingly, 'put onion' and 'put mug' are relevant for verb retrieval, whereas 'put cup' and 'take mug' are relevant for noun retrieval. The corresponding PoS-based relevances define the triplets T^k for L^k.

Experimental Details. Video features. We extract flow and appearance features using the TSN BNInception model [37]. Text features. We map each lemmatised word to its feature vector using a 100-dimension Word2Vec model trained on the Wikipedia corpus. Multiple word vectors with the same part of speech are aggregated by averaging. We also experimented with the pre-trained 300-dimension GloVe model and found the results to be similar. Architecture details. We implement f^k and g^k in each MMEN as a 2-layer perceptron (fully connected layers) with ReLU. Additionally, the input and output vectors are L2-normalised. In all cases, we set the dimension of the embedding space to 256, a dimension we found to be suitable across all settings. We use a single-layer perceptron with shared weights for f̂ and ĝ, which we initialise with PCA. Training details. The triplet weighting parameters are set to λ_{v,v} = λ_{t,t} = 0.1 and λ_{v,t} = λ_{t,v} = 1.0, and the loss weightings α_k are set to 1. The embedding models were implemented in Python using the TensorFlow library. We trained the models with an Adam solver and a learning rate of 1e-5, considering batch sizes of 256, where for each query we sample 100 random triplets from the corresponding T_{v,t}, T_{t,v}, T_{v,v}, T_{t,t} sets. Training generally converges after a few thousand iterations; we report all results after 4000 iterations. Evaluation metrics. We report mean average precision (mAP), i.e. for each query we consider the average precision over all relevant elements and take the mean over all queries. We consider each element in the test set as a query in turn. When reporting within-modal retrieval mAP, the corresponding item (video or caption) is removed from the test set for that query.

Results. First, we consider cross-modal and within-modal fine-grained action retrieval. Then, we present an ablation study as well as qualitative results to gain more insight. Finally, we show that our approach is well suited to zero-shot settings. These models are also compared to standard baselines. The Random Baseline randomly ranks all the database items, providing a lower bound on the mAP scores. The CCA baseline applies Canonical Correlation Analysis to both modalities v_i and t_i to find a joint embedding space for cross-modal retrieval [9]. Finally, Features (Word2Vec) and Features (Video), which are only defined for within-modal retrieval (i.e. vv and tt), show the performance when we directly use the video representation v_i or the averaged Word2Vec caption representation t_i. Cross-modal retrieval. Table 11 presents cross-modal results for fine-grained action retrieval. The main observation is that the proposed JPoSE outperforms all the MMEN variants and the baselines for both video-to-text (vt) and text-to-video (tv) retrieval, on both test sets.
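For reference, a small sketch of the mAP protocol described under 'Evaluation metrics' above, assuming a precomputed query-to-database distance matrix and a boolean relevance matrix derived from the verb/noun classes; this is an illustrative re-implementation, not the authors' evaluation code.

```python
import numpy as np

def mean_average_precision(dist, relevant):
    """`dist[q, d]` is the distance between query q and database item d;
    `relevant[q, d]` is True when d is relevant to q. Each query's average
    precision is computed over all its relevant items, then averaged over
    queries that have at least one relevant item."""
    aps = []
    for q in range(dist.shape[0]):
        order = np.argsort(dist[q])          # ranked database items
        rel = relevant[q, order]             # relevance in ranked order
        if not rel.any():
            continue
        hits = np.cumsum(rel)                # relevant items retrieved so far
        precisions = hits[rel] / (np.flatnonzero(rel) + 1)
        aps.append(precisions.mean())
    return float(np.mean(aps))

# Toy usage: 3 queries, 4 database items.
dist = np.array([[0.1, 0.9, 0.5, 0.7],
                 [0.8, 0.2, 0.6, 0.4],
                 [0.3, 0.4, 0.2, 0.9]])
relevant = np.array([[True, False, True, False],
                     [False, True, False, True],
                     [True, False, False, False]], dtype=bool)
print(mean_average_precision(dist, relevant))
```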
We also note that MMEN ([Verb, Noun]) outperforms the other MMEN variants, showing the benefit of learning specialised embeddings. Yet the full JPoSE is crucial to get the best results.

Within-modal retrieval. Table 12 shows the within-modal retrieval results for both text-to-text (tt) and video-to-video (vv) retrieval. Again, JPoSE outperforms all the flavours of MMEN on both test sets. This shows that by learning a cross-modal embedding we inject information from the other modality that helps to better disambiguate and hence to improve the search.

Ablation study. We evaluate the role of the components of the proposed JPoSE model, for both cross-modal and within-modal retrieval. Table 4 reports results comparing different options for the encoding functions e_v and e_t, in addition to learning the model jointly both with and without the learned functions f̂ and ĝ. This confirms that the proposed approach is the best option. In the supplementary material, we also compare the performance when using the closed-vocabulary classes from EPIC to learn the embedding.

Table 5 shows the zero-shot (ZS) counts in both test sets. In total, 12% of the videos in both test sets are zero-shot instances. We separate cases where the noun is present in the training set but the verb is not, denoted by ZSV (zero-shot verb), from ZSN (zero-shot noun), where the verb is present but not the noun. Cross-modal ZS retrieval results for this interesting setting are shown in Table 6. We compare JPoSE to MMEN (Caption) and the baselines. Results show that the proposed JPoSE model clearly improves in these zero-shot settings, thanks to the different views captured by the multiple PoS embeddings, specialised to acts and objects.

Qualitative results. Fig. 3 illustrates both video-to-text and text-to-video retrieval. For several queries, it shows the relevance of the top-50 retrieved items (relevant in green, non-relevant in grey). Fig. 4 illustrates our motivation that disentangling PoS embeddings would learn different visual functions. It presents maximum-activation examples on chosen neurons within f^k for both the verb and noun embeddings. Each cluster represents the 9 videos that respond maximally to one of these neurons. We can remark that noun activations indeed correspond to objects of shared appearance occurring in different actions (in the figure, chopping boards in one cluster and cutlery in the second), while verb embedding neurons respond to videos sharing the same act regardless of the objects involved.

General Video Retrieval on the MSR-VTT Dataset. We select MSR-VTT [39] as a public dataset for general video retrieval. Originally used for video captioning, this large-scale video understanding dataset is increasingly evaluated for video-to-text and text-to-video retrieval [8,22,24,41,23]. We follow the code and setup of [22], using the same train/test split that includes 7,656 training videos, each with 20 different captions describing the scene, and 1000 test videos with one caption per video. We also follow the evaluation protocol in [22] and compute recall@K (R@K) and median rank (MR). In contrast to the EPIC dataset, there are no semantic groupings of the captions in MSR-VTT. Each caption is considered relevant only for a single video, and two captions describing different videos are considered irrelevant even if they share semantic similarities. Furthermore, disentangling captions surfaces further semantic similarities that this relevance definition ignores. For example, "A cooking tutorial" and "A person is cooking" will, for a verb-MMEN, be considered irrelevant as they belong to different videos, even though they share the same single verb 'cook'.
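A minimal sketch of the MSR-VTT evaluation protocol mentioned above (recall@K and median rank), under the assumption that item i is the single ground-truth match for query i; the names and toy data are illustrative.

```python
import numpy as np

def recall_at_k_and_median_rank(dist, ks=(1, 5, 10)):
    """`dist[i, j]` is the distance between query i and database item j, and
    item i is the single ground-truth match for query i. Returns recall@K for
    each K and the median rank of the ground-truth items (rank 1 = best)."""
    n = dist.shape[0]
    order = np.argsort(dist, axis=1)
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(n)])
    recalls = {k: float(np.mean(ranks <= k)) for k in ks}
    return recalls, float(np.median(ranks))

# Toy usage: random embeddings for 100 video/caption pairs, text-to-video search.
rng = np.random.default_rng(0)
v, t = rng.normal(size=(100, 256)), rng.normal(size=(100, 256))
dist = np.linalg.norm(t[:, None, :] - v[None, :, :], axis=-1)
print(recall_at_k_and_median_rank(dist))
```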
Consequently, with relevance defined in this way, we cannot directly apply JPoSE as proposed in Sec. 3.3. Instead, we adapt JPoSE to this problem as follows. We use the Mixture-of-Embedding-Experts (MEE) model from [22] as our core MMEN network. In fact, MEE is a form of multi-modal embedding network in that it embeds videos and captions into the same space. We instead focus on assessing whether disentangling PoS and learning multiple PoS-aware embeddings produce better results. In this adapted JPoSE we encode the output of the disentangled PoS-MMENs with e_v and e_t (i.e. concatenated) and use NetVLAD [3] to aggregate the Word2Vec representations. Instead of the combined loss in Equation (7), we use the pair loss also used in [22]:

L(θ) = (1/B) Σ_{i=1}^{B} Σ_{j≠i} [ max( γ + d(f_{v_i}, g_{t_i}) − d(f_{v_i}, g_{t_j}), 0 ) + max( γ + d(f_{v_i}, g_{t_i}) − d(f_{v_j}, g_{t_i}), 0 ) ]    (8)

This same loss is used when we train the different MMENs.

Table 7. MSR-VTT Video-Caption Retrieval results. *We include results from [22], only available for Text-to-Video retrieval.

Visual and text features. We use the appearance, flow, audio and facial pre-extracted visual features provided by [22]. For the captions, we extract the encodings ourselves using the same Word2Vec model as for EPIC; note that this explains the difference between the results reported in [22] (shown in the first row of Table 7) and MMEN (Caption).

Results. We report video-to-text and text-to-video retrieval on MSR-VTT in Table 7 for the standard baselines and several MMEN variants. Comparing MMENs, we note that nouns are much more informative than verbs for this retrieval task. MMEN results with other PoS tags (shown in the supplementary) are even lower, indicating that they are not informative alone. Building on these findings, we report results of a JPoSE combining two MMENs, one for nouns and one for the remainder of the caption (Caption\Noun). Our adapted JPoSE model consistently outperforms the full-caption single embedding for both video-to-text and text-to-video retrieval. We report other PoS disentanglement results in the supplementary material.

Qualitative results. Figure 5 shows qualitative results comparing the full-caption model and JPoSE, noting the disentangled model's ability to commonly rank videos closer to their corresponding captions.

Conclusion. We have proposed a method for fine-grained action retrieval. By learning distinct embeddings for each PoS, our model is able to combine these in a principled manner and to create a space suitable for action retrieval, outperforming approaches which learn such a space from captions alone. We tested our method on a fine-grained action retrieval dataset, EPIC, using the open-vocabulary labels. Our results demonstrate the ability of the method to generalise to zero-shot cases. Additionally, we show the applicability of the notion of disentangling the caption for the general video-retrieval task on MSR-VTT.

Table 9. Noun retrieval task results on the seen test set of EPIC-Kitchens.

Supplementary Material

A. Individual Part-of-Speech Retrieval (Sec. 3.3). In the main manuscript, we report results on the task of fine-grained action retrieval. For completeness, we here present results on individual Part-of-Speech (PoS) retrieval tasks. In Table 8, we report results for fine-grained verb retrieval (i.e. only retrieving the relevant verb/action in the video). We include the standard baselines and we additionally report the results obtained by a PoS-MMEN, that is, a single embedding for verbs solely. We compare this to our proposed multi-embedding JPoSE.
Using JPoSE produces better (or the same) results for both cross-modal and within-modal searches. Similarly, in Table 9, we compare results for fine-grained noun retrieval (i.e. only retrieving the relevant noun/object in the video). We show similar increases in mAP for cross-modal and within-modal searches. This indicates that the complementary PoS information, from the other PoS embedding as well as from the PoS-aware action embedding, helps to better define the individual embedding space.

B. Closed vs Open Vocabulary Embedding

C. Text Embedding Using an RNN. We provide here the results of replacing the text embedding function g with an RNN instead of the two-layer perceptron for the MMEN method. The RNN was modelled as a Gated Recurrent Unit (GRU). Captions were capped and zero-padded to a maximum length of 15 words. Adding a layer on top of the GRU proved not to be useful. Results of the RNN in the experiments are given under the name MMEN (Caption RNN). Given the single verb and low noun count per caption, RNNs were not tested for the individual PoS-MMENs. Cross-modal and within-modal results can be seen in Tables 11 and 12 respectively. The inclusion of the RNN brings improvements in mAP for tv, vv and tt compared to MMEN (Caption). However, compared to MMEN ([Verb, Noun]) or JPoSE (Verb, Noun), using the entire caption still leads to worse results for both cross-modal and within-modal retrieval.

D. Additional MSR-VTT Experiments (Sec. 4.2). Table 13 of this supplementary material is an expanded version of Table 7 in the main paper, testing a variety of different PoS combinations. For each row, an average of 10 runs is reported. This experiment also includes the removal of the NetVLAD layer in the MMEN, substituting it with mean pooling, which we label AVG. Results show that, on their own, Determiners, Adjectives and Adpositions achieve very poor results. We also report three JPoSE disentanglement options: (Verb, Noun), (Caption\Verb, Verb) and the one in the main paper (Caption\Noun, Noun). The table shows that the best results are achieved when nouns are disentangled from the rest of the caption.

Table 13. MSR-VTT Video-Caption Retrieval results using recall@K (R@K, higher is better) and median rank (MR, lower is better). For each row, an average of 10 runs is reported. *We include results from [22], only available for Text-to-Video retrieval.
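For the MMEN (Caption RNN) variant of supplementary Section C, a possible GRU text encoder is sketched below in tf.keras (the paper's models were implemented in TensorFlow); apart from the 15-word cap, zero padding and 256-d output stated above, the layer choices are assumptions rather than the authors' exact architecture.

```python
import numpy as np
import tensorflow as tf

MAX_LEN, WORD_DIM, EMBED_DIM = 15, 100, 256

# GRU text encoder g: captions are capped and zero-padded to 15 Word2Vec
# vectors; zero timesteps are masked out; the output is L2-normalised.
text_encoder = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(MAX_LEN, WORD_DIM)),
    tf.keras.layers.GRU(EMBED_DIM),
    tf.keras.layers.Lambda(lambda z: tf.math.l2_normalize(z, axis=-1)),
])

# Toy usage: a batch of 4 captions, each a (15, 100) matrix of word vectors,
# with 5 real words and the remaining timesteps as zero padding.
batch = np.zeros((4, MAX_LEN, WORD_DIM), dtype=np.float32)
batch[:, :5] = np.random.randn(4, 5, WORD_DIM)
print(text_encoder(batch).shape)  # (4, 256)
```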
4,495
1908.03477
2968848930
We address the problem of cross-modal fine-grained action retrieval between text and video. Cross-modal retrieval is commonly achieved through learning a shared embedding space, that can indifferently embed modalities. In this paper, we propose to enrich the embedding by disentangling parts-of-speech (PoS) in the accompanying captions. We build a separate multi-modal embedding space for each PoS tag. The outputs of multiple PoS embeddings are then used as input to an integrated multi-modal space, where we perform action retrieval. All embeddings are trained jointly through a combination of PoS-aware and PoS-agnostic losses. Our proposal enables learning specialised embedding spaces that offer multiple views of the same embedded entities. We report the first retrieval results on fine-grained actions for the large-scale EPIC dataset, in a generalised zero-shot setting. Results show the advantage of our approach for both video-to-text and text-to-video action retrieval. We also demonstrate the benefit of disentangling the PoS for the generic task of cross-modal video retrieval on the MSR-VTT dataset.
Representing text. Early works in image-to-text cross-modal retrieval @cite_12 @cite_3 @cite_25 used TF-IDF as a weighted bag-of-words model over text representations (either from a word embedding model or one-hot vectors) in order to aggregate variable-length text captions into a single fixed-size representation. With the advent of neural networks, works shifted to using RNNs, Gated Recurrent Units (GRUs) or Long Short-Term Memory (LSTM) units to extract textual features @cite_40, or to using these models within the embedding network @cite_35 @cite_18 @cite_0 @cite_16 @cite_43 for both modalities.
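As a rough illustration of the TF-IDF-weighted bag-of-words aggregation used by these early works, the sketch below weights each word vector by its tf-idf score and sums the result into a fixed-size caption representation; the helper name, corpus statistics and vectors are all hypothetical.

```python
import numpy as np
from collections import Counter

def tfidf_caption_vector(caption_tokens, corpus_doc_freq, n_docs, word_vectors, dim=100):
    """Aggregate a variable-length caption into one fixed-size vector: each word
    vector (from a word-embedding model or a one-hot encoding) is weighted by
    its tf-idf score and the weighted vectors are summed."""
    tf = Counter(caption_tokens)
    vec = np.zeros(dim)
    for word, count in tf.items():
        if word not in word_vectors:
            continue
        idf = np.log(n_docs / (1.0 + corpus_doc_freq.get(word, 0)))
        vec += (count / len(caption_tokens)) * idf * word_vectors[word]
    return vec

# Toy usage with a 3-document "corpus" and random 100-d word vectors.
rng = np.random.default_rng(0)
wv = {w: rng.normal(size=100) for w in ["person", "cooking", "pan"]}
df = {"person": 3, "cooking": 1, "pan": 1}
print(tfidf_caption_vector(["person", "cooking", "pan"], df, n_docs=3, word_vectors=wv).shape)
```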
{ "abstract": [ "We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero shot action recognition and obtain state of the art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.", "Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison.", "Querying with an example image is a simple and intuitive interface to retrieve information from a visual database. Most of the research in image retrieval has focused on the task of instance-level image retrieval, where the goal is to retrieve images that contain the same object instance as the query image. In this work we move beyond instance-level retrieval and consider the task of semantic image retrieval in complex scenes, where the goal is to retrieve images that share the same semantics as the query image. We show that, despite its subjective nature, the task of semantically ranking visual scenes is consistently implemented across a pool of human annotators. We also show that a similarity based on human-annotated region-level captions is highly correlated with the human ranking and constitutes a good computable surrogate. Following this observation, we learn a visual embedding of the images where the similarity in the visual space is correlated with their semantic similarity surrogate. We further extend our model to learn a joint embedding of visual and textual cues that allows one to query the database using a text modifier in addition to the query image, adapting the results to the modifier. 
Finally, our model can ground the ranking decisions by showing regions that contributed the most to the similarity between pairs of images, providing a visual explanation of the similarity.", "Constructing a joint representation invariant across different modalities (e.g., video, language) is of significant importance in many multimedia applications. While there are a number of recent successes in developing effective image-text retrieval methods by learning joint representations, the video-text retrieval task, however, has not been explored to its fullest extent. In this paper, we study how to effectively utilize available multimodal cues from videos for the cross-modal video-text retrieval task. Based on our analysis, we propose a novel framework that simultaneously utilizes multi-modal features (different visual characteristics, audio inputs, and text) by a fusion strategy for efficient retrieval. Furthermore, we explore several loss functions in training the embedding and propose a modified pairwise ranking loss for the task. Experiments on MSVD and MSR-VTT datasets demonstrate that our method achieves significant performance gain compared to the state-of-the-art approaches.", "Learning a joint language-visual embedding has a number of very appealing properties and can result in variety of practical application, including natural language image video annotation and search. In this work, we study three different joint language-visual neural network model architectures. We evaluate our models on large scale LSMDC16 movie dataset for two tasks: 1) Standard Ranking for video annotation and retrieval 2) Our proposed movie multiple-choice test. This test facilitate automatic evaluation of visual-language models for natural language video annotation based on human activities. In addition to original Audio Description (AD) captions, provided as part of LSMDC16, we collected and will make available a) manually generated re-phrasings of those captions obtained using Amazon MTurk b) automatically generated human activity elements in \"Predicate + Object\" (PO) phrases based on \"Knowlywood\", an activity knowledge mining model. Our best model archives Recall@10 of 19.2 on annotation and 18.9 on video retrieval tasks for subset of 1000 samples. For multiple-choice test, our best model achieve accuracy 58.11 over whole LSMDC16 public test-set.", "", "Our objective is video retrieval based on natural language queries. In addition, we consider the analogous problem of retrieving sentences or generating descriptions given an input video. Recent work has addressed the problem by embedding visual and textual inputs into a common space where semantic similarities correlate to distances. We also adopt the embedding approach, and make the following contributions: First, we utilize web image search in sentence embedding process to disambiguate fine-grained visual concepts. Second, we propose embedding models for sentence, image, and video inputs whose parameters are learned simultaneously. Finally, we show how the proposed model can be applied to description generation. Overall, we observe a clear improvement over the state-of-the-art methods in the video and sentence retrieval tasks. 
In description generation, the performance level is comparable to the current state-of-the-art, although our embeddings were trained for the retrieval tasks.", "This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a largemargin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and textto-image retrieval. Our method achieves new state-of-theart results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.", "This paper studies the problem of associating images with descriptive sentences by embedding them in a common latent space. We are interested in learning such embeddings from hundreds of thousands or millions of examples. Unfortunately, it is prohibitively expensive to fully annotate this many training images with ground-truth sentences. Instead, we ask whether we can learn better image-sentence embeddings by augmenting small fully annotated training sets with millions of images that have weak and noisy annotations (titles, tags, or descriptions). After investigating several state-of-the-art scalable embedding methods, we introduce a new algorithm called Stacked Auxiliary Embedding that can successfully transfer knowledge from millions of weakly annotated images to improve the accuracy of retrieval-based image description." ], "cite_N": [ "@cite_35", "@cite_18", "@cite_3", "@cite_0", "@cite_43", "@cite_40", "@cite_16", "@cite_25", "@cite_12" ], "mid": [ "2908138876", "1527575280", "2744926832", "2808399042", "2526286384", "2890443664", "2490414731", "2963389687", "92662927" ] }
Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
With the onset of the digital age, millions of hours of video are being recorded, and searching this data is becoming a monumental task. It is even more tedious when searching shifts from video-level labels, such as 'dancing' or 'skiing', to short action segments like 'cracking eggs' or 'tightening a screw'. In this paper, we focus on the latter and refer to them as fine-grained actions. We thus explore the task of fine-grained action retrieval, where both queries and retrieved results can be either a video sequence or a textual caption describing the fine-grained action. Such free-form action descriptions allow for a more subtle characterisation of actions, but require going beyond training a classifier on a predefined set of action labels [20,30]. As is common in cross-modal search tasks [26,36], we learn a shared embedding space onto which we project both videos and captions.

Figure 1. We target fine-grained action retrieval. Action captions are broken using part-of-speech (PoS) parsing. We create separate embedding spaces for the relevant PoS (e.g. Noun or Verb) and then combine these embeddings into a shared embedding space for action retrieval (best viewed in colour).

By nature, fine-grained actions can be described by an actor, an act and the list of objects involved in the interaction. We thus propose to learn a separate embedding for each part-of-speech (PoS), such as, for instance, verbs, nouns or adjectives. This is illustrated in Fig. 1 for two PoS (verbs and nouns). When embedding verbs solely, relevant entities are those that share the same verb/act regardless of the nouns/objects used. Conversely, for a PoS embedding focusing on nouns, different actions performed on the same object are considered relevant entities. This enables a PoS-aware embedding, specialised for retrieving a variety of relevant entities, given that PoS. The outputs from the multiple PoS embedding spaces are then combined within an encoding module that produces the final action embedding. We train our approach end-to-end, jointly optimising the multiple PoS embeddings and the final fine-grained action embedding.

This approach has a number of advantages over training a single embedding space, as is standardly done [7,8,15,22,24]. Firstly, this process builds different embeddings that can be seen as different views of the data, which contribute to the final goal in a collaborative manner. Secondly, it allows us to inject additional information in a principled way, without requiring additional annotation, as parsing a caption for PoS is done automatically. Finally, when considering a single PoS at a time, for instance verbs, the corresponding PoS embedding learns to generalise across the variety of actions involving each verb (e.g. the many ways 'open' can be used). This generalisation is key to tackling more actions, including new ones not seen during training. We present the first retrieval results for the recent large-scale EPIC dataset [6] (Sec. 4.1), utilising the released free-form narrations, previously unexplored for this dataset, as our supervision. Additionally, we show that our second contribution, learning PoS-aware embeddings, is also valuable for general video retrieval, by reporting results on the MSR-VTT dataset [39] (Sec. 4.2).

Method. Our aim is to learn representations suitable for cross-modal search, where the query modality is different from the target modality.
Specifically, we use video sequences with textual captions/descriptions and perform video-to-text (vt) or text-to-video (tv) retrieval tasks. Additionally, we would like to make sure that classical search (where the query and the retrieved results share the same modality) can still be performed in that representation space. The latter tasks are referred to as video-to-video (vv) and text-to-text (tt) search. As discussed in the previous section, several possibilities exist, the most common being to embed both modalities in a shared space such that, regardless of the modality, the representations of two relevant entities are close to each other in that space, while the representations of two non-relevant entities are far apart. We first describe how to build such a joint embedding between two modalities, enforcing both cross-modal and within-modal constraints (Sec. 3.1). Then, based on the knowledge that different parts of the caption encode different aspects of an action, we describe how to leverage this information and build several disentangled Part-of-Speech embeddings (Sec. 3.2). Finally, we propose a unified representation well-suited for fine-grained action retrieval (Sec. 3.3).

Multi-Modal Embedding Network (MMEN). This section describes a Multi-Modal Embedding Network (MMEN) that encodes the video sequence and the text caption into a common descriptor space. Let {(v_i, t_i) | v_i ∈ V, t_i ∈ T} be a set of videos, with v_i the visual representation of the i-th video sequence and t_i the corresponding textual caption. Our aim is to learn two embedding functions f : V → Ω and g : T → Ω such that f(v_i) and g(t_i) are close in the embedded space Ω. Note that f and g can be linear projection matrices or more complex functions, e.g. deep neural networks. We denote the parameters of the embedding functions f and g by θ_f and θ_g respectively, and we learn them jointly with a weighted combination of two cross-modal (L_{v,t}, L_{t,v}) and two within-modal (L_{v,v}, L_{t,t}) triplet losses. Note that other point-wise, pairwise or list-wise losses could also be considered as alternatives to the triplet loss.

The cross-modal losses are crucial to the task and ensure that the representations of a query and of a relevant item from the other modality are closer than the representations of this query and of a non-relevant item. We use cross-modal triplet losses [19,36]:

L_{v,t}(θ) = Σ_{(i,j,k)∈T_{v,t}} max( γ + d(f_{v_i}, g_{t_j}) − d(f_{v_i}, g_{t_k}), 0 ),
T_{v,t} = { (i,j,k) | v_i ∈ V, t_j ∈ T_{i+}, t_k ∈ T_{i−} }    (1)

L_{t,v}(θ) = Σ_{(i,j,k)∈T_{t,v}} max( γ + d(g_{t_i}, f_{v_j}) − d(g_{t_i}, f_{v_k}), 0 ),
T_{t,v} = { (i,j,k) | t_i ∈ T, v_j ∈ V_{i+}, v_k ∈ V_{i−} }    (2)

where γ is a constant margin, θ = [θ_f, θ_g], and d(·) is the distance function in the embedded space Ω. T_{i+} and T_{i−} respectively denote the sets of relevant and non-relevant captions, and V_{i+} and V_{i−} the sets of relevant and non-relevant video sequences, for the multi-modal object (v_i, t_i). To simplify the notation, f_{v_i} denotes f(v_i) ∈ Ω and g_{t_j} denotes g(t_j) ∈ Ω. Additionally, within-modal losses, also called structure-preserving losses [19,36], ensure that the neighbourhood structure within each modality is preserved in the newly built joint embedding space.
Formally, the within-modal losses are defined as:

L_{v,v}(θ) = Σ_{(i,j,k)∈T_{v,v}} max( γ + d(f_{v_i}, f_{v_j}) − d(f_{v_i}, f_{v_k}), 0 ),
T_{v,v} = { (i,j,k) | v_i ∈ V, v_j ∈ V_{i+}, v_k ∈ V_{i−} }    (3)

L_{t,t}(θ) = Σ_{(i,j,k)∈T_{t,t}} max( γ + d(g_{t_i}, g_{t_j}) − d(g_{t_i}, g_{t_k}), 0 ),
T_{t,t} = { (i,j,k) | t_i ∈ T, t_j ∈ T_{i+}, t_k ∈ T_{i−} }    (4)

using the same notation as before. The final loss used for the MMEN network is a weighted combination of these four losses, summed over all triplets in T, defined as follows:

L(θ) = λ_{v,v} L_{v,v} + λ_{v,t} L_{v,t} + λ_{t,v} L_{t,v} + λ_{t,t} L_{t,t}    (5)

where each λ is a weighting for the corresponding loss term.

Disentangled Part-of-Speech Embeddings. The previous section described the generic Multi-Modal Embedding Network (MMEN). In this section, we propose to disentangle different caption components so that each component is encoded independently in its own embedding space. To do this, we first break down the text caption into different PoS tags. For example, the caption "I divided the onion into pieces using wooden spoon" can be divided into verbs [divide, using], pronouns [I], nouns [onion, pieces, spoon] and adjectives [wooden]. In our experiments, we focus on the tags most relevant for fine-grained action recognition, verbs and nouns, but we explore other types for general video retrieval. We extract all words from a caption for a given PoS tag and train one MMEN to embed only these words and the video representation in the same space. We refer to it as a PoS-MMEN.

To train a PoS-MMEN, we propose to adapt the notion of relevance specifically to the PoS. This has a direct impact on the sets V_{i+}, V_{i−}, T_{i+}, T_{i−} defined in Equations (1)-(4). For example, the caption 'cut tomato' is disentangled into the verb 'cut' and the noun 'tomato'. Consider a PoS-MMEN focusing on verb tags solely: the caption 'cut carrots' is a relevant caption, as the pair share the same verb 'cut'. In another PoS-MMEN focusing on noun tags solely, the two remain irrelevant. As the relevant/irrelevant sets differ within each PoS-MMEN, these embeddings specialise to that PoS. It is important to note that, although the same visual features are used as input for all PoS-MMENs, the fact that we build one embedding space per PoS trains multiple visual embedding functions f^k that can be seen as multiple views of the video sequence.

PoS-Aware Unified Action Embedding. The previous section describes how to extract different PoS from captions and how to build PoS-specific MMENs. These PoS-MMENs can already be used alone for PoS-specific retrieval tasks, for instance a verb-retrieval task (e.g. retrieve all videos where "cut" is relevant) or a noun-retrieval task. More importantly, the outputs of different PoS-MMENs can be combined to perform more complex tasks, including the one we are interested in, namely fine-grained action retrieval. Let us denote the k-th PoS-MMEN visual and textual embedding functions by f^k : V → Ω^k and g^k : T → Ω^k. We define:

v̂_i = e_v(f^1_{v_i}, f^2_{v_i}, ..., f^K_{v_i}),    t̂_i = e_t(g^1_{t^1_i}, g^2_{t^2_i}, ..., g^K_{t^K_i})    (6)

where e_v and e_t are encoding functions that combine the outputs of the PoS-MMENs. We explore multiple pooling functions for e_v and e_t: concatenation, max and average (the latter two assume all Ω^k share the same dimensionality). When v̂_i and t̂_i have the same dimension, we can perform action retrieval by directly computing the distance between these representations. We instead propose to train a final PoS-agnostic MMEN that unifies the representation, leading to our final JPoSE model.
Joint Part of Speech Embedding (JPoSE). Considering the PoS-aware representations v̂_i and t̂_i as input and, still following our learning-to-rank approach, we learn the parameters θ_{f̂} and θ_{ĝ} of the two embedding functions f̂ : V̂ → Γ and ĝ : T̂ → Γ, which project into our final embedding space Γ. We again consider this as the task of building a single MMEN with the inputs v̂_i and t̂_i, and follow the process described in Sec. 3.1. In other words, we train using the loss defined in Equation (5), which we denote L̂ here, and which combines two cross-modal and two within-modal losses using the triplets T_{v,t}, T_{t,v}, T_{v,v}, T_{t,t} formed from the relevance between videos and captions in the action retrieval task. As relevance here is not PoS-aware, we refer to this loss as PoS-agnostic. This is illustrated in Fig. 2. We learn the multiple PoS-MMENs and the final MMEN jointly with the following combined loss:

L(θ̂, θ^1, ..., θ^K) = L̂(θ̂) + Σ_{k=1}^{K} α_k L^k(θ^k)    (7)

where the α_k are weighting factors, L̂ is the PoS-agnostic loss described above and the L^k are the PoS-aware losses corresponding to the K PoS-MMENs.

Experiments. We first tackle fine-grained action retrieval on the EPIC dataset [6] (Sec. 4.1) and then the general video retrieval task on the MSR-VTT dataset [39] (Sec. 4.2). This allows us to explore two different tasks using the proposed multi-modal embeddings. The large English spaCy parser [1] was used to find the Part-of-Speech (PoS) tags and disentangle them in the captions of both datasets. Statistics on the most frequent PoS tags are shown in Table 1. As these statistics show, EPIC contains mainly nouns and verbs, while MSR-VTT has longer captions and more nouns. This has an impact on the PoS chosen for each dataset when building the JPoSE model.

Fine-Grained Action Retrieval on the EPIC Dataset. The EPIC dataset [6] is an egocentric dataset with 32 participants cooking in their own kitchens who then narrated the actions in their native language. The narrations were translated to English but maintain the open vocabulary selected by the participants. We employ the released free-form narrations to use this dataset for fine-grained action retrieval. We follow the provided train/test splits. Note that by construction there are two test sets, Seen and Unseen, referring to whether the kitchen has been seen in the training set. We follow the terminology from [6], and note that this terminology should not be confused with the zero-shot literature, which distinguishes seen/unseen classes. The actual sequences are strictly disjoint between all sets. Additionally, we train only on the many-shot examples from EPIC, excluding all examples of the few-shot classes from the training set. This ensures each action has more than 100 relevant videos during training and increases the number of zero-shot examples in both test sets.

Building relevance sets for retrieval. The EPIC dataset offers an opportunity for fine-grained action retrieval, as the open vocabulary has been grouped into semantically relevant verb and noun classes for the action recognition challenge. For example, 'put', 'place' and 'put-down' are grouped into one class. As far as we are aware, this paper presents the first attempt to use the open-vocabulary narrations released to the community. We determine retrieval relevance scores from these semantically grouped verb and noun classes, defined in [6]. These indicate which videos and captions should be considered related to each other.
Following these semantic groups, a query 'put mug' and a video with 'place cup' in its caption are considered relevant, as 'place' and 'put' share the same verb class and 'mug' and 'cup' share the same noun class. Subsequently, we define the triplets T_{v,t}, T_{t,v}, T_{v,v}, T_{t,t} used to train the MMEN models and to compute the loss \hat L in JPoSE. When training a PoS-MMEN, two videos are considered relevant only within that PoS. Accordingly, 'put onion' and 'put mug' are relevant for verb retrieval, whereas 'put cup' and 'take mug' are relevant for noun retrieval. The corresponding PoS-based relevances define the triplets T^k for L^k.

Experimental Details

Video features. We extract flow and appearance features using the TSN BNInception model [37].

Text features. We map each lemmatised word to its feature vector using a 100-dimension Word2Vec model, trained on the Wikipedia corpus. Multiple word vectors with the same part of speech were aggregated by averaging. We also experimented with the pre-trained 300-dimension GloVe model, and found the results to be similar.

Architecture details. We implement f^k and g^k in each MMEN as a two-layer perceptron (fully connected layers) with ReLU. Additionally, the input vectors and output vectors are L2-normalised. In all cases, we set the dimension of the embedding space to 256, a dimension we found to be suitable across all settings. We use a single-layer perceptron with shared weights for \hat f and \hat g, which we initialise with PCA.

Training details. The triplet weighting parameters are set to λ_{v,v} = λ_{t,t} = 0.1 and λ_{v,t} = λ_{t,v} = 1.0, and the loss weightings α_k are set to 1. The embedding models were implemented in Python using the TensorFlow library. We trained the models with an Adam solver and a learning rate of 10^{-5}, considering batch sizes of 256, where for each query we sample 100 random triplets from the corresponding T_{v,t}, T_{t,v}, T_{v,v}, T_{t,t} sets. Training generally converges after a few thousand iterations; we report all results after 4,000 iterations.

Evaluation metrics. We report mean average precision (mAP), i.e. for each query we consider the average precision over all relevant elements and take the mean over all queries. We consider each element in the test set as a query in turn. When reporting within-modal retrieval mAP, the corresponding item (video or caption) is removed from the test set for that query (a brief sketch of this protocol is given below).

Results

First, we consider cross-modal and within-modal fine-grained action retrieval. Then, we present an ablation study as well as qualitative results to get more insights. Finally, we show that our approach is well-suited for zero-shot settings. These models are also compared to standard baselines. The Random Baseline randomly ranks all the database items, providing a lower bound on the mAP scores. The CCA baseline applies Canonical Correlation Analysis to both modalities v_i and t_i to find a joint embedding space for cross-modal retrieval [9]. Finally, Features (Word2Vec) and Features (Video), which are only defined for within-modal retrieval (i.e. vv and tt), show the performance when we directly use the video representation v_i or the averaged Word2Vec caption representation t_i.

Cross-modal retrieval. Table 11 presents cross-modal results for fine-grained action retrieval. The main observation is that the proposed JPoSE outperforms all the MMEN variants and the baselines for both video-to-text (vt) and text-to-video (tv) retrieval, on both test sets.
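As referenced above, here is a minimal NumPy sketch of the mAP protocol just described, with each test item used as a query in turn and the query itself excluded for within-modal retrieval. The Euclidean ranking and the boolean relevance matrix are assumptions made for illustration only.

```python
import numpy as np

def average_precision(ranked_relevance):
    """AP for one query, given a boolean array ordered by increasing distance."""
    hits = np.flatnonzero(ranked_relevance)
    if hits.size == 0:
        return 0.0
    precisions = (np.arange(hits.size) + 1) / (hits + 1)
    return precisions.mean()

def retrieval_map(queries, gallery, relevance, exclude_self=False):
    """queries: (N, D), gallery: (M, D) embedded arrays.
    relevance[i, j] is True if gallery item j is relevant to query i.
    For within-modal retrieval, queries and gallery are the same set,
    exclude_self is True and relevance[i, i] is assumed False."""
    aps = []
    for i, q in enumerate(queries):
        dists = np.linalg.norm(gallery - q, axis=1)
        if exclude_self:
            dists[i] = np.inf          # drop the query itself from the ranking
        order = np.argsort(dists)
        aps.append(average_precision(relevance[i, order]))
    return float(np.mean(aps))
```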
We also note that MMEN ([Verb, Noun]) outperforms the other MMEN variants, showing the benefit of learning specialised embeddings. Yet the full JPoSE is crucial to get the best results.

Within-modal retrieval. Table 12 shows the within-modal retrieval results for both text-to-text (tt) and video-to-video (vv) retrieval. Again, JPoSE outperforms all the flavours of MMEN on both test sets. This shows that by learning a cross-modal embedding we inject information from the other modality that helps to better disambiguate and hence to improve the search.

Ablation study. We evaluate the role of the components of the proposed JPoSE model, for both cross-modal and within-modal retrieval. Table 4 reports results comparing different options for the encoding functions e_v and e_t, in addition to learning the model jointly both with and without the learned functions \hat f and \hat g. This confirms that the proposed approach is the best option. In the supplementary material, we also compare the performance when using the closed vocabulary classes from EPIC to learn the embedding.

Table 5 shows the zero-shot (ZS) counts in both test sets. In total, 12% of the videos in both test sets are zero-shot instances. We separate cases where the noun is present in the training set but the verb is not, denoted by ZSV (zero-shot verb), from ZSN (zero-shot noun), where the verb is present but not the noun. Cross-modal ZS retrieval results for this interesting setting are shown in Table 6. We compare JPoSE to MMEN (Caption) and baselines. Results show that the proposed JPoSE model clearly improves over these zero-shot settings, thanks to the different views captured by the multiple PoS embeddings, specialised to acts and objects.

Qualitative results. Fig. 3 illustrates both video-to-text and text-to-video retrieval. For several queries, it shows the relevance of the top-50 retrieved items (relevant in green, non-relevant in grey). Fig. 4 illustrates our motivation that disentangling PoS embeddings would learn different visual functions. It presents maximum activation examples on chosen neurons within f^k for both verb and noun embeddings. Each cluster represents the 9 videos that respond maximally to one of these neurons³. We can remark that noun activations indeed correspond to objects of shared appearance occurring in different actions (in the figure, chopping boards in one and cutlery in the second), while verb embedding neurons respond to similar motions performed across different objects.

General Video Retrieval on MSR-VTT Dataset.

We select MSR-VTT [39] as a public dataset for general video retrieval. Originally used for video captioning, this large-scale video understanding dataset is increasingly evaluated for video-to-text and text-to-video retrieval [8,22,24,41,23]. We follow the code and setup of [22], using the same train/test split that includes 7,656 training videos, each with 20 different captions describing the scene, and 1,000 test videos with one caption per video. We also follow the evaluation protocol in [22] and compute recall@k (R@K) and median rank (MR). In contrast to the EPIC dataset, there are no semantic groupings of the captions in MSR-VTT. Each caption is considered relevant only for a single video, and two captions describing different videos are considered irrelevant even if they share semantic similarities. Furthermore, disentangling the captions exposes further semantic similarities that this relevance definition ignores. For example, "A cooking tutorial" and "A person is cooking" will, for a verb-MMEN, be considered irrelevant as they belong to different videos, even though they share the same single verb 'cook'.
Consequently, we cannot directly apply JPoSE as proposed in Sec. 3.3. Instead, we adapt JPoSE to this problem as follows. We use the Mixture-of-Embedding-Experts (MEE) model from [22] as our core MMEN network. In fact, MEE is a form of multi-modal embedding network in that it embeds videos and captions into the same space. We instead focus on assessing whether disentangling PoS and learning multiple PoS-aware embeddings produce better results. In this adapted JPoSE we encode the output of the disentangled PoS-MMENs with e_v and e_t (i.e. concatenated) and use NetVLAD [3] to aggregate the Word2Vec representations. Instead of the combined loss in Equation (7), we use the pair loss also used in [22] (a short illustrative sketch of this loss is given further below):

$$L(\theta) = \frac{1}{B} \sum_{i}^{B} \sum_{j \neq i} \Big[ \max\big(\gamma + d(f_{v_i}, g_{t_i}) - d(f_{v_i}, g_{t_j}),\, 0\big) + \max\big(\gamma + d(f_{v_i}, g_{t_i}) - d(f_{v_j}, g_{t_i}),\, 0\big) \Big] \qquad (8)$$

This same loss is used when we train the different MMENs.

Table 7. MSR-VTT Video-Caption Retrieval results. *We include results from [22], only available for Text-to-Video retrieval.

Visual and text features. We use the pre-extracted appearance, flow, audio and facial visual features provided by [22]. For the captions, we extract the encodings ourselves using the same Word2Vec model as for EPIC (note that this explains the difference between the results reported in [22], shown in the first row of Table 7, and MMEN (Caption)).

Results. We report on video-to-text and text-to-video retrieval on MSR-VTT in Table 7 for the standard baselines and several MMEN variants. Comparing MMENs, we note that nouns are much more informative than verbs for this retrieval task. MMEN results with other PoS tags (shown in the supplementary) are even lower, indicating that they are not informative alone. Building on these findings, we report results of a JPoSE combining two MMENs, one for nouns and one for the remainder of the caption (Caption\Noun). Our adapted JPoSE model consistently outperforms the full-caption single embedding for both video-to-text and text-to-video retrieval. We report other PoS disentanglement results in the supplementary material.

Qualitative results. Figure 5 shows qualitative results comparing the use of the full caption against JPoSE, noting the disentangled model's ability to more often rank videos closer to their corresponding captions.

Conclusion

We have proposed a method for fine-grained action retrieval. By learning distinct embeddings for each PoS, our model is able to combine these in a principled manner and to create a space suitable for action retrieval, outperforming approaches which learn such a space through captions alone. We tested our method on a fine-grained action retrieval dataset, EPIC, using the open vocabulary labels. Our results demonstrate the ability of the method to generalise to zero-shot cases. Additionally, we show the applicability of the notion of disentangling the caption for the general video-retrieval task on MSR-VTT.

Table 9. Noun retrieval task results on the seen test set of EPIC-Kitchens.

Supplementary Material

A. Individual Part-of-Speech Retrieval (Sec. 3.3)

In the main manuscript, we report results on the task of fine-grained action retrieval. For completeness, we here present results on individual Part-of-Speech (PoS) retrieval tasks. In Table 8, we report results for fine-grained verb retrieval (i.e. only retrieve the relevant verb/action in the video). We include the standard baselines and we additionally report the results obtained by a PoS-MMEN, that is, a single embedding for verbs solely. We compare this to our proposed multi-embedding JPoSE.
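As flagged in the MSR-VTT subsection above, the following is a minimal NumPy sketch of the bidirectional pairwise ranking loss of Equation (8), computed over a batch of already-embedded video/caption pairs. The Euclidean distance and the batch layout are assumptions made for illustration.

```python
import numpy as np

def pair_loss(f_v, g_t, margin=0.2):
    """Bidirectional max-margin pair loss (Eq. 8).
    f_v, g_t: (B, D) arrays of embedded videos and their matching captions,
    aligned so that row i of f_v corresponds to row i of g_t."""
    B = f_v.shape[0]
    # Pairwise distances d(f_vi, g_tj) for all i, j.
    dists = np.linalg.norm(f_v[:, None, :] - g_t[None, :, :], axis=-1)
    pos = np.diag(dists)                  # matching-pair distances d(f_vi, g_ti)
    loss = 0.0
    for i in range(B):
        for j in range(B):
            if j == i:
                continue
            loss += max(margin + pos[i] - dists[i, j], 0.0)  # caption negative t_j
            loss += max(margin + pos[i] - dists[j, i], 0.0)  # video negative v_j
    return loss / B
```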
Using JPoSE produces better (or the same) results for both cross-modal and within-modal searches. Similarly, in Table 9, we compare results for fine-grained noun retrieval (i.e. only retrieve the relevant noun/object in the video). We show similar increases in mAP over cross-modal and within-modal searches. This indicates that the complementary PoS information, from the other PoS embedding as well as from the PoS-aware action embedding, helps to better define the individual embedding space.

B. Closed vs Open Vocabulary Embedding

C. Text Embedding Using RNN

We provide here the results of replacing the text embedding function g with an RNN instead of the two-layer perceptron for the MMEN method. The RNN was modelled as a Gated Recurrent Unit (GRU). Captions were capped and zero-padded to a maximum length of 15 words. Adding a layer on top of the GRU proved not to be useful (a small illustrative sketch of such an encoder is given at the end of this supplementary material). Results of the RNN in the experiments are given under the name MMEN (Caption RNN). Given the single verb and low noun count per caption, RNNs were not tested for the individual PoS-MMENs. Cross-modal and within-modal results can be seen in Tables 11 and 12 respectively. The inclusion of the RNN sees improvements in mAP performance for tv, vv and tt compared to MMEN (Caption). However, compared to MMEN ([Verb, Noun]) or JPoSE (Verb, Noun), using the entire caption still leads to worse results for both cross-modal and within-modal retrieval.

D. Additional MSR-VTT Experiments (Sec. 4.2)

Table 13 of this supplementary material is an expanded version of Table 7 in the main paper, testing a variety of different PoS combinations. For each row, an average of 10 runs is reported. This experiment also includes the removal of the NetVLAD layer in the MMEN, substituting it with mean pooling, which we label AVG. Results show that, on their own, Determiners, Adjectives and Adpositions achieve very poor results. We also report three JPoSE disentanglement options: (Verb, Noun), (Caption\Verb, Verb) and the one in the main paper, (Caption\Noun, Noun). The table shows that the best results are achieved when nouns are disentangled from the rest of the caption.

Table 13. MSR-VTT Video-Caption Retrieval results using recall@k (R@k, higher is better) and median rank (MR, lower is better). For each row, an average of 10 runs is reported. *We include results from [22], only available for Text-to-Video retrieval.
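To illustrate the GRU text encoder described in Section C, here is a small NumPy sketch of a single-layer GRU run over a zero-padded caption of at most 15 Word2Vec vectors. The dimensions, initialisation, padding handling and final-state readout are our own assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyGRU:
    """Minimal GRU cell used as the caption embedding function g (sketch)."""
    def __init__(self, input_dim=100, hidden_dim=256, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.1
        # One weight matrix per gate, acting on the concatenation [x_t, h_{t-1}];
        # biases are omitted to keep the sketch short.
        self.W_z = rng.normal(0, scale, (input_dim + hidden_dim, hidden_dim))
        self.W_r = rng.normal(0, scale, (input_dim + hidden_dim, hidden_dim))
        self.W_h = rng.normal(0, scale, (input_dim + hidden_dim, hidden_dim))
        self.hidden_dim = hidden_dim

    def encode(self, word_vectors):
        """word_vectors: (15, input_dim); zero rows are padding and are skipped."""
        h = np.zeros(self.hidden_dim)
        for x in word_vectors:
            if not np.any(x):                          # skip zero-padding
                continue
            xh = np.concatenate([x, h])
            z = sigmoid(xh @ self.W_z)                 # update gate
            r = sigmoid(xh @ self.W_r)                 # reset gate
            h_tilde = np.tanh(np.concatenate([x, r * h]) @ self.W_h)
            h = (1 - z) * h + z * h_tilde
        return h / (np.linalg.norm(h) + 1e-8)          # L2-normalised, as in the MMEN
```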
4,495
1908.03477
2968848930
We address the problem of cross-modal fine-grained action retrieval between text and video. Cross-modal retrieval is commonly achieved through learning a shared embedding space, that can indifferently embed modalities. In this paper, we propose to enrich the embedding by disentangling parts-of-speech (PoS) in the accompanying captions. We build a separate multi-modal embedding space for each PoS tag. The outputs of multiple PoS embeddings are then used as input to an integrated multi-modal space, where we perform action retrieval. All embeddings are trained jointly through a combination of PoS-aware and PoS-agnostic losses. Our proposal enables learning specialised embedding spaces that offer multiple views of the same embedded entities. We report the first retrieval results on fine-grained actions for the large-scale EPIC dataset, in a generalised zero-shot setting. Results show the advantage of our approach for both video-to-text and text-to-video action retrieval. We also demonstrate the benefit of disentangling the PoS for the generic task of cross-modal video retrieval on the MSR-VTT dataset.
Hahn et al. @cite_35 use two LSTMs to directly project videos into the Word2Vec embedding space. This method is evaluated on higher-level activities, showing that such a visual embedding aligns well with the learned space of Word2Vec to perform zero-shot recognition of these coarser-grained classes. Miech et al. @cite_11 found that using NetVLAD @cite_29 results in an increase in accuracy over GRUs or LSTMs for the aggregation of both visual and text features. A follow-up to this work @cite_33 learns a mixture-of-experts embedding from multiple modalities such as appearance, motion, audio or face features. It learns a single output embedding which is the weighted similarity between the different implicit visual-text embeddings. Recently, Miech et al. @cite_19 proposed the HowTo100M dataset: a large dataset collected automatically using generated captions from YouTube videos of 'how-to' tasks. They find that fine-tuning on these weakly-paired video clips allows for state-of-the-art performance on a number of different datasets.
{ "abstract": [ "We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero shot action recognition and obtain state of the art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.", "Joint understanding of video and language is an active research area with many applications. Prior work in this domain typically relies on learning text-video embeddings. One difficulty with this approach, however, is the lack of large-scale annotated video-caption datasets for training. To address this issue, we aim at learning text-video embeddings from heterogeneous data sources. To this end, we propose a Mixture-of-Embedding-Experts (MEE) model with ability to handle missing input modalities during training. As a result, our framework can learn improved text-video embeddings simultaneously from image and video datasets. We also show the generalization of MEE to other input modalities such as face descriptors. We evaluate our method on the task of video retrieval and report results for the MPII Movie Description and MSR-VTT datasets. The proposed MEE model demonstrates significant improvements and outperforms previously reported methods on both text-to-video and video-to-text retrieval tasks. Code is available at: this https URL", "We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state of-the-art compact image representations on standard image retrieval benchmarks.", "Learning text-video embeddings usually requires a dataset of video clips with manually provided captions. However, such datasets are expensive and time consuming to create and therefore difficult to obtain on a large scale. In this work, we propose instead to learn such embeddings from video data with readily available natural language annotations in the form of automatically transcribed narrations. 
The contributions of this work are three-fold. First, we introduce HowTo100M: a large-scale dataset of 136 million video clips sourced from 1.22M narrated instructional web videos depicting humans performing and describing over 23k different visual tasks. Our data collection procedure is fast, scalable and does not require any additional manual annotation. Second, we demonstrate that a text-video embedding trained on this data leads to state-of-the-art results for text-to-video retrieval and action localization on instructional video datasets such as YouCook2 or CrossTask. Finally, we show that this embedding transfers well to other domains: fine-tuning on generic Youtube videos (MSR-VTT dataset) and movies (LSMDC dataset) outperforms models trained on these datasets alone. Our dataset, code and models will be publicly available at: this http URL.", "Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we here explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors , NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aiming at modeling in-terdependencies between features. We evaluate the method on the multi-modal Youtube-8M Large-Scale Video Understanding dataset using pre-extracted visual and audio features. We demonstrate improvements provided by the Context Gating as well as by the combination of learnable pooling methods. We finally show how this leads to the best performance, out of more than 600 teams, in the Kaggle Youtube-8M Large-Scale Video Understanding challenge." ], "cite_N": [ "@cite_35", "@cite_33", "@cite_29", "@cite_19", "@cite_11" ], "mid": [ "2908138876", "2796207103", "2179042386", "2948859046", "2706729717" ] }
Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
With the onset of the digital age, millions of hours of video are being recorded, and searching this data is becoming a monumental task. It is even more tedious when searching shifts from video-level labels, such as 'dancing' or 'skiing', to short action segments like 'cracking eggs' or 'tightening a screw'. In this paper, we focus on the latter and refer to them as fine-grained actions. We thus explore the task of fine-grained action retrieval, where both queries and retrieved results can be either a video sequence or a textual caption describing the fine-grained action. Such free-form action descriptions allow for a more subtle characterisation of actions, but require going beyond training a classifier on a predefined set of action labels [20,30]. As is common in cross-modal search tasks [26,36], we learn a shared embedding space onto which we project both videos and captions.

Figure 1. We target fine-grained action retrieval. Action captions are broken using part-of-speech (PoS) parsing. We create separate embedding spaces for the relevant PoS (e.g. Noun or Verb) and then combine these embeddings into a shared embedding space for action retrieval (best viewed in colour).

By nature, fine-grained actions can be described by an actor, an act and the list of objects involved in the interaction. We thus propose to learn a separate embedding for each part-of-speech (PoS), such as for instance verbs, nouns or adjectives. This is illustrated in Fig. 1 for two PoS (verbs and nouns). When embedding verbs solely, relevant entities are those that share the same verb/act regardless of the nouns/objects used. Conversely, for a PoS embedding focusing on nouns, different actions performed on the same object are considered relevant entities. This enables a PoS-aware embedding, specialised for retrieving a variety of relevant entities, given that PoS. The outputs from the multiple PoS embedding spaces are then combined within an encoding module that produces the final action embedding. We train our approach end-to-end, jointly optimising the multiple PoS embeddings and the final fine-grained action embedding.

This approach has a number of advantages over training a single embedding space, as is standardly done [7,8,15,22,24]. Firstly, this process builds different embeddings that can be seen as different views of the data, which contribute to the final goal in a collaborative manner. Secondly, it allows us to inject additional information in a principled way without requiring additional annotation, as parsing a caption for PoS is done automatically. Finally, when considering a single PoS at a time, for instance verbs, the corresponding PoS-embedding learns to generalise across the variety of actions involving each verb (e.g. the many ways 'open' can be used). This generalisation is key to tackling more actions, including new ones not seen during training.

We present the first retrieval results for the recent large-scale EPIC dataset [6] (Sec. 4.1), utilising the released free-form narrations, previously unexplored for this dataset, as our supervision. Additionally, we show that our second contribution, learning PoS-aware embeddings, is also valuable for general video retrieval by reporting results on the MSR-VTT dataset [39] (Sec. 4.2).

Method

Our aim is to learn representations suitable for cross-modal search, where the query modality is different from the target modality.
Specifically, we use video sequences with textual captions/descriptions and perform video-to-text (vt) or text-to-video (tv) retrieval tasks. Additionally, we would like to make sure that classical search (where the query and the retrieved results have the same modality) could still be performed in that representation space. The latter are referred to as video-to-video (vv) and text-to-text (tt) search tasks. As discussed in the previous section, several possibilities exist, the most common being to embed both modalities in a shared space such that, regardless of the modality, the representations of two relevant entities in that space are close to each other, while the representations of two non-relevant entities are far apart. We first describe how to build such a joint embedding between two modalities, enforcing both cross-modal and within-modal constraints (Sec. 3.1). Then, based on the knowledge that different parts of the caption encode different aspects of an action, we describe how to leverage this information and build several disentangled Part of Speech embeddings (Sec. 3.2). Finally, we propose a unified representation well-suited for fine-grained action retrieval (Sec. 3.3).

Multi-Modal Embedding Network (MMEN)

This section describes a Multi-Modal Embedding Network (MMEN) that encodes the video sequence and the text caption into a common descriptor space. Let {(v_i, t_i) | v_i ∈ V, t_i ∈ T} be a set of videos, with v_i being the visual representation of the i-th video sequence and t_i the corresponding textual caption. Our aim is to learn two embedding functions f : V → Ω and g : T → Ω, such that f(v_i) and g(t_i) are close in the embedded space Ω. Note that f and g can be linear projection matrices or more complex functions, e.g. deep neural networks. We denote the parameters of the embedding functions f and g by θ_f and θ_g respectively, and we learn them jointly with a weighted combination of two cross-modal (L_{v,t}, L_{t,v}) and two within-modal (L_{v,v}, L_{t,t}) triplet losses. Note that other point-wise, pairwise or list-wise losses can also be considered as alternatives to the triplet loss.

The cross-modal losses are crucial to the task and ensure that the representations of a query and a relevant item for that query from a different modality are closer than the representations of this query and a non-relevant item. We use cross-modal triplet losses [19,36]:

$$L_{v,t}(\theta) = \sum_{(i,j,k)\in T_{v,t}} \max\big(\gamma + d(f_{v_i}, g_{t_j}) - d(f_{v_i}, g_{t_k}),\, 0\big), \quad T_{v,t} = \{(i,j,k) \mid v_i \in V,\ t_j \in T_{i+},\ t_k \in T_{i-}\} \qquad (1)$$

$$L_{t,v}(\theta) = \sum_{(i,j,k)\in T_{t,v}} \max\big(\gamma + d(g_{t_i}, f_{v_j}) - d(g_{t_i}, f_{v_k}),\, 0\big), \quad T_{t,v} = \{(i,j,k) \mid t_i \in T,\ v_j \in V_{i+},\ v_k \in V_{i-}\} \qquad (2)$$

where γ is a constant margin, θ = [θ_f, θ_g], and d(·) is the distance function in the embedded space Ω. T_{i+} and T_{i-} respectively define the sets of relevant and non-relevant captions, and V_{i+} and V_{i-} the sets of relevant and non-relevant video sequences, for the multi-modal object (v_i, t_i). To simplify the notation, f_{v_i} denotes f(v_i) ∈ Ω and g_{t_j} denotes g(t_j) ∈ Ω.

Additionally, within-modal losses, also called structure-preserving losses [19,36], ensure that the neighbourhood structure within each modality is preserved in the newly built joint embedding space.
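For concreteness, here is a minimal NumPy sketch of an embedding function f or g realised as a two-layer perceptron with ReLU and L2 normalisation, as described for the MMEN. The layer sizes, initialisation and the Euclidean distance d are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np

class TwoLayerEmbedder:
    """f or g: a two-layer perceptron with ReLU, mapping an (L2-normalised)
    video or caption feature into a shared 256-d space Omega (sketch)."""
    def __init__(self, in_dim, hidden_dim=512, out_dim=256, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.01, (in_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.normal(0, 0.01, (hidden_dim, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, x):
        x = x / (np.linalg.norm(x) + 1e-8)            # L2-normalise the input
        h = np.maximum(x @ self.W1 + self.b1, 0.0)    # ReLU hidden layer
        y = h @ self.W2 + self.b2
        return y / (np.linalg.norm(y) + 1e-8)         # L2-normalise the output

# Usage: project a video feature and an averaged caption Word2Vec vector into
# the same space, then compare them with an (assumed) Euclidean distance d.
f = TwoLayerEmbedder(in_dim=2048)   # hypothetical visual feature dimensionality
g = TwoLayerEmbedder(in_dim=100)    # 100-d Word2Vec caption average
d = lambda a, b: np.linalg.norm(a - b)
```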
4,495
1908.03477
2968848930
We address the problem of cross-modal fine-grained action retrieval between text and video. Cross-modal retrieval is commonly achieved through learning a shared embedding space, that can indifferently embed modalities. In this paper, we propose to enrich the embedding by disentangling parts-of-speech (PoS) in the accompanying captions. We build a separate multi-modal embedding space for each PoS tag. The outputs of multiple PoS embeddings are then used as input to an integrated multi-modal space, where we perform action retrieval. All embeddings are trained jointly through a combination of PoS-aware and PoS-agnostic losses. Our proposal enables learning specialised embedding spaces that offer multiple views of the same embedded entities. We report the first retrieval results on fine-grained actions for the large-scale EPIC dataset, in a generalised zero-shot setting. Results show the advantage of our approach for both video-to-text and text-to-video action retrieval. We also demonstrate the benefit of disentangling the PoS for the generic task of cross-modal video retrieval on the MSR-VTT dataset.
Fine-grained action recognition. Recently, several large-scale datasets have been published for the task of fine-grained action recognition @cite_17 @cite_15 @cite_23 @cite_36 @cite_24. These generally focus on a closed vocabulary of class labels describing short and/or specific actions.
{ "abstract": [ "Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 s, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for computer vision community.", "Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them.", "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. 
The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8 mAP, underscoring the need for developing new approaches for video understanding.", "Neural networks trained on datasets such as ImageNet have led to major advances in visual object classification. One obstacle that prevents networks from reasoning more deeply about complex scenes and situations, and from integrating visual knowledge with natural language, like humans do, is their lack of common sense knowledge about the physical world. Videos, unlike still images, contain a wealth of detailed information about the physical world. However, most labelled video datasets represent high-level concepts rather than detailed physical aspects about actions and scenes. In this work, we describe our ongoing collection of the “something-something” database of video prediction tasks whose solutions require a common sense understanding of the depicted situation. The database currently contains more than 100,000 videos across 174 classes, which are defined as caption-templates. We also describe the challenges in crowd-sourcing this data at scale.", "" ], "cite_N": [ "@cite_36", "@cite_24", "@cite_23", "@cite_15", "@cite_17" ], "mid": [ "2337252826", "2156798932", "2618799552", "2625366777", "2964242760" ] }
Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
With the onset of the digital age, millions of hours of video are being recorded and searching this data is becoming a monumental task. It is even more tedious when searching shifts from video-level labels, such as 'dancing' or 'skiing', to short action segments like 'cracking eggs' or 'tightening a screw'. In this paper, we focus on the latter and refer to them as fine-grained actions. We thus explore the task of fine-grained action retrieval where both queries and retrieved results can be either a video sequence, or a textual caption describing the fine-grained action. Such free-form action descriptions allow for a more subtle characterisation of actions but require going beyond training a classifier on a predefined set of action labels [20,30]. As is common in cross-modal search tasks [26,36], we learn a shared embedding space onto which we project both videos and captions. By nature, fine-grained actions can Figure 1. We target fine-grained action retrieval. Action captions are broken using part-of-speech (PoS) parsing. We create separate embedding spaces for the relevant PoS (e.g. Noun or Verb) and then combine these embeddings into a shared embedding space for action retrieval (best viewed in colour). be described by an actor, an act and the list of objects involved in the interaction. We thus propose to learn a separate embedding for each part-of-speech (PoS), such as for instance verbs, nouns or adjectives. This is illustrated in Fig. 1 for two PoS (verbs and nouns). When embedding verbs solely, relevant entities are those that share the same verb/act regardless of the nouns/objects used. Conversely, for a PoS embedding focusing on nouns, different actions performed on the same object are considered relevant entities. This enables a PoS-aware embedding, specialised for retrieving a variety of relevant entities, given that PoS. The outputs from the multiple PoS embedding spaces are then combined within an encoding module that produces the final action embedding. We train our approach end-to-end, jointly optimising the multiple PoS embeddings and the final fine-grained action embedding. This approach has a number of advantages over training a single embedding space as is standardly done [7,8,15,22,24]. Firstly, this process builds different embeddings that can be seen as different views of the data, which contribute to the final goal in a collaborative manner. Secondly, it allows to inject, in a principled way, additional information but without requiring additional annotation, as parsing a caption for PoS is done automatically. Finally, when considering a single PoS at a time, for instance verbs, the cor-responding PoS-embedding learns to generalise across the variety of actions involving each verb (e.g. the many ways 'open' can be used). This generalisation is key to tackling more actions including new ones not seen during training. We present the first retrieval results for the recent largescale EPIC dataset [6] (Sec 4.1), utilising the released freeform narrations, previously unexplored for this dataset, as our supervision. Additionally, we show that our second contribution, learning PoS-aware embeddings, is also valuable for general video retrieval by reporting results on the MSR-VTT dataset [39] (Sec. 4.2). Method Our aim is to learn representations suitable for crossmodal search where the query modality is different from the target modality. 
Specifically, we use video sequences with textual captions/descriptions and perform video-to-text (vt) or text-to-video (tv) retrieval tasks. Additionally, we would like to make sure that classical search (where the query and the retrieved results have the same modality) could still be performed in that representation space. The latter are referred to as video-to-video (vv) and text-to-text (tt) search tasks. As discussed in the previous section, several possibilities exist, the most common being to embed both modalities in a shared space such that, regardless of the modality, the representations of two relevant entities in that space are close to each other, while the representations of two non-relevant entities are far apart. We first describe how to build such a joint embedding between two modalities, enforcing both cross-modal and within-modal constraints (Sec. 3.1). Then, based on the knowledge that different parts of the caption encode different aspects of an action, we describe how to leverage this information and build several disentangled Part of Speech embeddings (Sec. 3.2). Finally, we propose a unified representation well-suited for fine-grained action retrieval (Sec. 3.3). Multi-Modal Embedding Network (MMEN). This section describes a Multi-Modal Embedding Network (MMEN) that encodes the video sequence and the text caption into a common descriptor space. Let $\{(v_i, t_i) \mid v_i \in V, t_i \in T\}$ be a set of videos with $v_i$ being the visual representation of the $i$-th video sequence and $t_i$ the corresponding textual caption. Our aim is to learn two embedding functions $f: V \to \Omega$ and $g: T \to \Omega$, such that $f(v_i)$ and $g(t_i)$ are close in the embedded space $\Omega$. Note that $f$ and $g$ can be linear projection matrices or more complex functions, e.g. deep neural networks. We denote the parameters of the embedding functions $f$ and $g$ by $\theta_f$ and $\theta_g$ respectively, and we learn them jointly with a weighted combination of two cross-modal ($L_{v,t}$, $L_{t,v}$) and two within-modal ($L_{v,v}$, $L_{t,t}$) triplet losses. Note that other point-wise, pair-wise or list-wise losses can also be considered as alternatives to the triplet loss. The cross-modal losses are crucial to the task and ensure that the representations of a query and a relevant item from the other modality are closer than the representations of that query and a non-relevant item. We use cross-modal triplet losses [19,36]:
$$L_{v,t}(\theta) = \sum_{(i,j,k)\in T_{v,t}} \max\big(\gamma + d(f_{v_i}, g_{t_j}) - d(f_{v_i}, g_{t_k}),\, 0\big), \quad T_{v,t} = \{(i,j,k) \mid v_i \in V,\ t_j \in T_{i+},\ t_k \in T_{i-}\} \tag{1}$$
$$L_{t,v}(\theta) = \sum_{(i,j,k)\in T_{t,v}} \max\big(\gamma + d(g_{t_i}, f_{v_j}) - d(g_{t_i}, f_{v_k}),\, 0\big), \quad T_{t,v} = \{(i,j,k) \mid t_i \in T,\ v_j \in V_{i+},\ v_k \in V_{i-}\} \tag{2}$$
where $\gamma$ is a constant margin, $\theta = [\theta_f, \theta_g]$, and $d(\cdot)$ is the distance function in the embedded space $\Omega$. $T_{i+}$ and $T_{i-}$ respectively define the sets of relevant and non-relevant captions, and $V_{i+}$ and $V_{i-}$ the sets of relevant and non-relevant video sequences, for the multi-modal object $(v_i, t_i)$. To simplify the notation, $f_{v_i}$ denotes $f(v_i) \in \Omega$ and $g_{t_j}$ denotes $g(t_j) \in \Omega$. Additionally, within-modal losses, also called structure-preserving losses [19,36], ensure that the neighbourhood structure within each modality is preserved in the newly built joint embedding space.
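To make Equation (1) concrete, here is a minimal sketch of the cross-modal triplet loss. The experiments later state that the embedding models were implemented in TensorFlow, so this PyTorch version is an illustrative re-implementation rather than the authors' code; the squared-Euclidean choice for $d$, the margin value and the triplet tensor layout are assumptions.

```python
import torch

def cross_modal_triplet_loss(f_v, g_t, triplets, gamma=0.1):
    """Sketch of L_{v,t} (Eq. 1): anchor video, positive caption, negative caption.

    f_v:      (N_v, D) video embeddings f(v_i)
    g_t:      (N_t, D) caption embeddings g(t_j)
    triplets: (M, 3) long tensor of (i, j, k) with t_j relevant and
              t_k non-relevant to video v_i
    gamma:    constant margin (value here is illustrative, not the paper's)
    """
    anchor = f_v[triplets[:, 0]]                 # f_{v_i}
    pos = g_t[triplets[:, 1]]                    # g_{t_j}
    neg = g_t[triplets[:, 2]]                    # g_{t_k}
    d_pos = ((anchor - pos) ** 2).sum(dim=1)     # d(f_{v_i}, g_{t_j}), assumed squared Euclidean
    d_neg = ((anchor - neg) ** 2).sum(dim=1)     # d(f_{v_i}, g_{t_k})
    return torch.clamp(gamma + d_pos - d_neg, min=0).sum()
```

The t-to-v direction of Equation (2) follows by swapping the roles of the two modalities, and the within-modal losses of Equations (3)-(4) reuse the same function with both arguments drawn from a single modality.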
Formally,
$$L_{v,v}(\theta) = \sum_{(i,j,k)\in T_{v,v}} \max\big(\gamma + d(f_{v_i}, f_{v_j}) - d(f_{v_i}, f_{v_k}),\, 0\big), \quad T_{v,v} = \{(i,j,k) \mid v_i \in V,\ v_j \in V_{i+},\ v_k \in V_{i-}\} \tag{3}$$
$$L_{t,t}(\theta) = \sum_{(i,j,k)\in T_{t,t}} \max\big(\gamma + d(g_{t_i}, g_{t_j}) - d(g_{t_i}, g_{t_k}),\, 0\big), \quad T_{t,t} = \{(i,j,k) \mid t_i \in T,\ t_j \in T_{i+},\ t_k \in T_{i-}\} \tag{4}$$
using the same notation as before. The final loss used for the MMEN network is a weighted combination of these four losses, summed over all triplets in $T$, defined as follows:
$$L(\theta) = \lambda_{v,v} L_{v,v} + \lambda_{v,t} L_{v,t} + \lambda_{t,v} L_{t,v} + \lambda_{t,t} L_{t,t} \tag{5}$$
where each $\lambda$ is a weighting for the corresponding loss term. Disentangled Part of Speech Embeddings. The previous section described the generic Multi-Modal Embedding Network (MMEN). In this section, we propose to disentangle different caption components so each component is encoded independently in its own embedding space. To do this, we first break down the text caption into different PoS tags. For example, the caption "I divided the onion into pieces using wooden spoon" can be divided into verbs [divide, using], pronouns [I], nouns [onion, pieces, spoon] and adjectives [wooden]. In our experiments, we focus on the most relevant ones for fine-grained action recognition: verbs and nouns, but we explore other types for general video retrieval. We extract all words from a caption for a given PoS tag and train one MMEN to only embed these words and the video representation in the same space. We refer to it as a PoS-MMEN. To train a PoS-MMEN, we propose to adapt the notion of relevance specifically to the PoS. This has a direct impact on the sets $V_{i+}$, $V_{i-}$, $T_{i+}$, $T_{i-}$ defined in Equations (1)-(4). For example, the caption 'cut tomato' is disentangled into the verb 'cut' and the noun 'tomato'. Consider a PoS-MMEN focusing on verb tags solely. The caption 'cut carrots' is a relevant caption as the pair share the same verb 'cut'. In another PoS-MMEN focusing on noun tags solely, the two remain irrelevant. As the relevant/irrelevant sets differ within each PoS-MMEN, these embeddings specialise to that PoS. It is important to note that, although the same visual features are used as input for all PoS-MMENs, the fact that we build one embedding space per PoS trains multiple visual embedding functions $f^k$ that can be seen as multiple views of the video sequence. PoS-Aware Unified Action Embedding. The previous section describes how to extract different PoS from captions and how to build PoS-specific MMENs. These PoS-MMENs can already be used alone for PoS-specific retrieval tasks, for instance a verb-retrieval task (e.g. retrieve all videos where "cut" is relevant) or a noun-retrieval task. More importantly, the outputs of different PoS-MMENs can be combined to perform more complex tasks, including the one we are interested in, namely fine-grained action retrieval. Let us denote the $k$-th PoS-MMEN visual and textual embedding functions by $f^k: V \to \Omega_k$ and $g^k: T \to \Omega_k$. We define:
$$\hat{v}_i = e_v(f^1_{v_i}, f^2_{v_i}, \dots, f^K_{v_i}), \qquad \hat{t}_i = e_t(g^1_{t^1_i}, g^2_{t^2_i}, \dots, g^K_{t^K_i}) \tag{6}$$
where $e_v$ and $e_t$ are encoding functions that combine the outputs of the PoS-MMENs. We explore multiple pooling functions for $e_v$ and $e_t$: concatenation, max and average (the latter two assume all $\Omega_k$ share the same dimensionality). When $\hat{v}_i$ and $\hat{t}_i$ have the same dimension, we can perform action retrieval by directly computing the distance between these representations. We instead propose to train a final PoS-agnostic MMEN that unifies the representation, leading to our final JPoSE model. Joint Part of Speech Embedding (JPoSE).
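The caption disentanglement described above amounts to grouping each caption's lemmas by PoS tag. A minimal sketch using spaCy (the parser named in the experiments) is shown below; the model name `en_core_web_lg` and the choice to lemmatise and lower-case are assumptions consistent with the description, not necessarily the authors' exact pipeline.

```python
import spacy
from collections import defaultdict

# The experiments report using the large English spaCy parser;
# en_core_web_lg is assumed here and must be installed separately.
nlp = spacy.load("en_core_web_lg")

def disentangle_caption(caption, keep_tags=("VERB", "NOUN")):
    """Split a caption into PoS groups, e.g. roughly
    {'VERB': ['divide', 'use'], 'NOUN': ['onion', 'piece', 'spoon']}
    for the example caption discussed in the text."""
    groups = defaultdict(list)
    for token in nlp(caption):
        if token.pos_ in keep_tags:
            groups[token.pos_].append(token.lemma_.lower())
    return dict(groups)

print(disentangle_caption("I divided the onion into pieces using wooden spoon"))
```

The verb and noun groups then feed the corresponding verb- and noun-MMENs, while the full caption feeds the PoS-agnostic branch.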
Considering the PoS-aware representations $\hat{v}_i$ and $\hat{t}_i$ as input and, still following our learning-to-rank approach, we learn the parameters $\theta_{\hat{f}}$ and $\theta_{\hat{g}}$ of the two embedding functions $\hat{f}: \hat{V} \to \Gamma$ and $\hat{g}: \hat{T} \to \Gamma$ which project into our final embedding space $\Gamma$. We again consider this as the task of building a single MMEN with the inputs $\hat{v}_i$ and $\hat{t}_i$, and follow the process described in Sec. 3.1. In other words, we train using the loss defined in Equation (5), which we denote $\hat{L}$ here, and which combines two cross-modal and two within-modal losses using the triplets $T_{v,t}$, $T_{t,v}$, $T_{v,v}$, $T_{t,t}$ formed using relevance between videos and captions based on the action retrieval task. As relevance here is not PoS-aware, we refer to this loss as PoS-agnostic. This is illustrated in Fig. 2. We learn the multiple PoS-MMENs and the final MMEN jointly with the following combined loss:
$$L(\theta, \theta^1, \dots, \theta^K) = \hat{L}(\theta) + \sum_{k=1}^{K} \alpha_k L^k(\theta^k) \tag{7}$$
where the $\alpha_k$ are weighting factors, $\hat{L}$ is the PoS-agnostic loss described above and the $L^k$ are the PoS-aware losses corresponding to the $K$ PoS-MMENs. Experiments. We first tackle fine-grained action retrieval on the EPIC dataset [6] (Sec. 4.1) and then the general video retrieval task on the MSR-VTT dataset [39] (Sec. 4.2). This allows us to explore two different tasks using the proposed multi-modal embeddings. The large English spaCy parser [1] was used to find the Part-of-Speech (PoS) tags and disentangle them in the captions of both datasets. Statistics on the most frequent PoS tags are shown in Table 1. As these statistics show, EPIC contains mainly nouns and verbs, while MSR-VTT has longer captions and more nouns. This will have an impact on the PoS chosen for each dataset when building the JPoSE model. Fine-Grained Action Retrieval on the EPIC Dataset. The EPIC dataset [6] is an egocentric dataset with 32 participants cooking in their own kitchens who then narrated the actions in their native language. The narrations were translated to English but maintain the open vocabulary selected by the participants. We employ the released free-form narrations to use this dataset for fine-grained action retrieval. We follow the provided train/test splits. Note that by construction there are two test sets, Seen and Unseen, referring to whether the kitchen has been seen in the training set. We follow the terminology from [6], and note that this terminology should not be confused with the zero-shot literature, which distinguishes seen/unseen classes. The actual sequences are strictly disjoint between all sets. Additionally, we train only on the many-shot examples from EPIC, excluding all examples of the few-shot classes from the training set. This ensures each action has more than 100 relevant videos during training and increases the number of zero-shot examples in both test sets. Building relevance sets for retrieval. The EPIC dataset offers an opportunity for fine-grained action retrieval, as the open vocabulary has been grouped into semantically relevant verb and noun classes for the action recognition challenge. For example, 'put', 'place' and 'put-down' are grouped into one class. As far as we are aware, this paper presents the first attempt to use the open-vocabulary narrations released to the community. We determine retrieval relevance scores from these semantically grouped verb and noun classes, defined in [6]. These indicate which videos and captions should be considered related to each other.
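For readers who prefer code to notation, the sketch below illustrates the final PoS-agnostic stage (pooling of Eq. (6) by concatenation, projection into $\Gamma$) together with the combined loss of Eq. (7). The 256-dimensional spaces, concatenation pooling, the single-layer shared projection and $\alpha_k = 1$ follow the experimental details given later; everything else (hidden sizes, bias choice, PCA initialisation being omitted) is an assumption of this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JPoSEHead(nn.Module):
    """Sketch: pool the K PoS-MMEN outputs (Eq. 6) and project with a single
    linear layer shared between f_hat and g_hat into the final space Gamma."""

    def __init__(self, k_pos=2, pos_dim=256, out_dim=256):
        super().__init__()
        # One linear layer, applied to both the pooled video and pooled text
        # vectors, mimicking the shared-weight f_hat / g_hat (PCA init omitted).
        self.proj = nn.Linear(k_pos * pos_dim, out_dim, bias=False)

    def forward(self, pos_embeddings):
        # pos_embeddings: list of K tensors of shape (B, pos_dim)
        pooled = torch.cat(pos_embeddings, dim=1)       # e_v / e_t as concatenation
        return F.normalize(self.proj(pooled), dim=1)    # L2-normalised output in Gamma

def jpose_loss(pos_agnostic_loss, pos_aware_losses, alphas=None):
    """Eq. (7): L = L_hat + sum_k alpha_k * L^k (alpha_k = 1 in the reported setup)."""
    if alphas is None:
        alphas = [1.0] * len(pos_aware_losses)
    return pos_agnostic_loss + sum(a * l for a, l in zip(alphas, pos_aware_losses))
```

All terms are differentiable scalars, so a single backward pass through `jpose_loss` trains the PoS-MMENs and the final head jointly, as described above.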
Following these semantic groups, a query 'put mug' and a video with 'place cup' in its caption are considered relevant as 'place' and 'put' share the same verb class and 'mug' and 'cup' share the same noun class. Subsequently, we define the triplets T v,t , T t,v , T v,v , T t,t used to train the MMEN models and to compute the lossL in JPoSE. When training a PoS-MMEN, two videos are considered relevant only within that PoS. Accordingly, 'put onion' and 'put mug' are relevant for verb retrieval, whereas, 'put cup' and 'take mug' are for noun retrieval. The corresponding PoS-based relevances define the triplets T k for L k . Experimental Details Video features. We extract flow and appearance features using the TSN BNInception model [37] Text features. We map each lemmatised word to its feature vector using a 100-dimension Word2Vec model, trained on the Wikipedia corpus. Multiple word vectors with the same part of speech were aggregated by averaging. We also experimented with the pre-trained 300-dimension Glove model, and found the results to be similar. Architecture details. We implement f k and g k in each MMEN as a 2 layer perceptron (fully connected layers) with ReLU. Additionally, the input vectors and output vectors are L2 normalised. In all cases, we set the dimension of the embedding space to 256, a dimension we found to be suitable across all settings. We use a single layer perceptron with shared weights forf andĝ that we initialise with PCA. Training details. The triplet weighting parameters are set to λ v,v = λ t,t = 0.1 and λ v,t = λ t,v = 1.0 and the loss weightings α k are set to 1. The embedding models were implemented in Python using the Tensorflow library. We trained the models with an Adam solver and a learning rate of 1e −5 , considering batch sizes of 256, where for each query we sample 100 random triplets from the corresponding T v,t , T t,v , T v,v , T t,t sets. The training in general converges after a few thousand iterations, we report all results after 4000 iterations. Evaluation metrics. We report mean average precision (mAP), i.e. for each query we consider the average precision over all relevant elements and take the mean over all queries. We consider each element in the test set as a query in turns. When reporting within-modal retrieval mAP, the corresponding item (video or caption) is removed from the test set for that query. Results First, we consider cross-modal and within-modal finegrained action retrieval. Then, we present an ablation study as well as qualitative results to get more insights. Finally we show that our approach is well-suited for zero-shot settings. These models are also compared to standard baselines. The Random Baseline randomly ranks all the database items, providing a lower bound on the mAP scores. The CCA-baseline applies Canonical Correlation Analysis to both modalities v i and t i to find a joint embedding space for cross-modal retrieval [9]. Finally, Features (Word2Vec) and Features (Video), which are only defined for withinmodal retrieval (i.e. vv and tt), show the performance when we directly use the video representation v i or the averaged Word2Vec caption representation t i . Cross-modal retrieval. Table 11 presents cross-modal results for fine-grained action retrieval. The main observation is that the proposed JPoSE outperforms all the MMEN variants and the baselines for both video-to-text (vt) and textto-video retrieval (tv), on both test sets. 
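The mAP protocol in the evaluation-metrics paragraph above can be written down in a few lines. The sketch below only illustrates the averaging over a ranked relevance list per query; producing the ranked lists (by embedding distance) and the authors' exact evaluation script are outside its scope.

```python
import numpy as np

def average_precision(ranked_relevance):
    """ranked_relevance: 1/0 relevance over the ranked retrieval list of one query."""
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    cum_hits = np.cumsum(rel)
    precision_at_k = cum_hits / np.arange(1, len(rel) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

def mean_average_precision(all_ranked_relevance):
    """Mean over all queries, each test element being used as a query in turn."""
    return float(np.mean([average_precision(r) for r in all_ranked_relevance]))

# Toy usage: two queries with their ranked relevance lists.
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1]]))
```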
We also note that MMEN ([Verb, Noun]) outperforms other MMEN variants, showing the benefit of learning specialised embeddings. Yet the full JPoSE is crucial to get the best results. Within-modal retrieval. Table 12 shows the withinmodal retrieval results for both text-to-text (tt) and videoto-video (vv) retrieval. Again, JPoSE outperforms all the flavours of MMEN on both test sets. This shows that by learning a cross-modal embedding we inject information from the other modality that helps to better disambiguate and hence to improve the search. Ablation study. We evaluate the role of the components of the proposed JPoSE model, for both cross-modal and within-modal retrieval. Table 4 reports results comparing different options for the encoding functions e v and e t in addition to learning the model jointly both with and without learned functionsf andĝ. This confirms that the proposed approach is the best option. In the supplementary material, we also compare the performance when using the closed vocabulary classes from EPIC to learn the embedding. Table 5 shows the zero-shot (ZS) counts in both test sets. In total 12% of the videos in both test sets are zero-shot instances. We separate cases where the noun is present in the training set but the verb is not, denoted by ZSV (zero-shot verb), from ZSN (zero-shot noun) where the verb is present but not the noun. Cross-modal ZS retrieval results for this interesting setting are shown in Table 6. We compare JPoSE to MMEN (Caption) and baselines. Results show that the proposed JPoSE model clearly improves over these zero-shot settings, thanks to the different views captured by the multiple PoS embeddings, specialised to acts and objects. Qualitative results. Fig. 3 illustrates both video-to-text and text-to-video retrieval. For several queries, it shows the relevance of the top-50 retrieved items (relevant in green, nonrelevant in grey). Fig. 4 illustrates our motivation that disentangling PoS embeddings would learn different visual functions. It presents maximum activation examples on chosen neurons within f i for both verb and noun embeddings. Each cluster represents the 9 videos that respond maximally to one of these neurons 3 . We can remark that noun activations indeed correspond to objects of shared appearance occurring in different actions (in the figure, chopping boards in one and cutlery in the second), while verb embedding neuron General Video Retrieval on MSR-VTT Dataset. We select MSR-VTT [39] as a public dataset for general video retrieval. Originally used for video captioning, this large-scale video understanding dataset is increasingly evaluated for video-to-text and text-to-video retrieval [8,22,24,41,23]. We follow the code and setup of [22] using the same train/test split that includes 7,656 training videos each with 20 different captions describing the scene and 1000 test videos with one caption per video. We also follow the evaluation protocol in [22] and compute recall@k (R@K) and median rank (MR). In contrast to the EPIC dataset, there is no semantic groupings of the captions in MSR-VTT. Each caption is considered relevant only for a single video, and two captions describing different videos are considered irrelevant even if they share semantic similarities. Furthermore, disentangling captions yields further semantic similarities. For example, "A cooking tutorial" and "A person is cooking", for a verb-MMEN, will be considered irrelevant as they belong to different videos even though they share the same single verb 'cook'. 
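The CCA baseline used in the comparisons above can be reproduced in spirit with scikit-learn, fitting on paired (video, caption) training features and projecting both test modalities into the correlated space. The number of components and the use of scikit-learn are assumptions; the paper does not specify its CCA implementation.

```python
from sklearn.cross_decomposition import CCA

def cca_baseline(train_v, train_t, test_v, test_t, n_components=128):
    """Fit CCA on paired training features (videos, captions) and return the
    projected test features of both modalities for cross-modal retrieval."""
    cca = CCA(n_components=n_components, max_iter=1000)
    cca.fit(train_v, train_t)
    return cca.transform(test_v, test_t)   # (projected videos, projected captions)
```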
Consequently, we cannot directly apply JPoSE as proposed in Sec. 3.3. Instead, we adapt JPoSE to this problem as follows. We use the Mixture-of-Expert Embeddings (MEE) model from [22] as our core MMEN network (Table 7 reports the MSR-VTT video-caption retrieval results; *results from [22] are only available for text-to-video retrieval). In fact, MEE is a form of multi-modal embedding network in that it embeds videos and captions into the same space. We instead focus on assessing whether disentangling PoS and learning multiple PoS-aware embeddings produce better results. In this adapted JPoSE we encode the output of the disentangled PoS-MMENs with $e_v$ and $e_t$ (i.e. concatenated) and use NetVLAD [3] to aggregate Word2Vec representations. Instead of the combined loss in Equation (7), we use the pair loss, used also in [22]:
$$L(\theta) = \frac{1}{B} \sum_{i}^{B} \sum_{j \neq i} \Big[ \max\big(\gamma + d(f_{v_i}, g_{t_i}) - d(f_{v_i}, g_{t_j}),\, 0\big) + \max\big(\gamma + d(f_{v_i}, g_{t_i}) - d(f_{v_j}, g_{t_i}),\, 0\big) \Big] \tag{8}$$
This same loss is used when we train the different MMENs. Visual and text features. We use the appearance, flow, audio and facial pre-extracted visual features provided by [22]. For the captions, we extract the encodings ourselves using the same Word2Vec model as for EPIC; note that this explains the difference between the results reported in [22] (shown in the first row of Table 7) and MMEN (Caption). Results. We report on video-to-text and text-to-video retrieval on MSR-VTT in Table 7 for the standard baselines and several MMEN variants. Comparing MMENs, we note that nouns are much more informative than verbs for this retrieval task. MMEN results with other PoS tags (shown in the supplementary) are even lower, indicating that they are not informative alone. Building on these findings, we report results of a JPoSE combining two MMENs, one for nouns and one for the remainder of the caption (Caption\Noun). Our adapted JPoSE model consistently outperforms the full-caption single embedding for both video-to-text and text-to-video retrieval. We report other PoS disentanglement results in the supplementary material. Qualitative results. Figure 5 shows qualitative results comparing the use of the full caption and JPoSE, noting the disentangled model's ability to commonly rank videos closer to their corresponding captions. Conclusion. We have proposed a method for fine-grained action retrieval. By learning distinct embeddings for each PoS, our model is able to combine these in a principled manner and to create a space suitable for action retrieval, outperforming approaches which learn such a space through captions alone. We tested our method on a fine-grained action retrieval dataset, EPIC, using the open-vocabulary labels. Our results demonstrate the ability of the method to generalise to zero-shot cases. Additionally, we show the applicability of the notion of disentangling the caption for the general video-retrieval task on MSR-VTT. Table 9. Noun retrieval task results on the seen test set of EPIC-Kitchens. Supplementary Material. A. Individual Part-of-Speech Retrieval (Sec. 3.3). In the main manuscript, we report results on the task of fine-grained action retrieval. For completeness, we here present results on individual Part-of-Speech (PoS) retrieval tasks. In Table 8, we report results for fine-grained verb retrieval (i.e. only retrieving the relevant verb/action in the video). We include the standard baselines and we additionally report the results obtained by a PoS-MMEN, that is, a single embedding for verbs solely. We compare this to our proposed multi-embedding JPoSE.
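For reference, the bidirectional pair loss of Equation (8) can be sketched over a batch as below, with wrong-caption and wrong-video hinge terms. This is an illustrative PyTorch version, not the code of [22]; the squared-Euclidean distance and the margin value are assumptions.

```python
import torch

def pair_loss(f_v, g_t, gamma=0.2):
    """Sketch of Eq. (8): batch-wise bidirectional max-margin pair loss.

    f_v, g_t: (B, D) embeddings of B videos and their matching captions,
              where row i of f_v corresponds to row i of g_t.
    gamma:    margin (illustrative value).
    """
    B = f_v.size(0)
    d = torch.cdist(f_v, g_t, p=2) ** 2            # d[i, j] = d(f_{v_i}, g_{t_j}), assumed squared
    pos = d.diagonal().unsqueeze(1)                # d(f_{v_i}, g_{t_i}) as a (B, 1) column
    off_diag = ~torch.eye(B, dtype=torch.bool, device=f_v.device)
    cap_neg = torch.clamp(gamma + pos - d, min=0)        # negatives: wrong caption t_j
    vid_neg = torch.clamp(gamma + pos - d.t(), min=0)    # negatives: wrong video v_j
    return (cap_neg[off_diag].sum() + vid_neg[off_diag].sum()) / B
```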
Using JPoSE produces better (or the same) results for both cross-modal and within-modal searches. Similarly, in Table 9, we compare results for fine-grained noun retrieval (i.e. only retrieving the relevant noun/object in the video). We show similar increases in mAP over cross-modal and within-modal searches. This indicates that the complementary PoS information, from the other PoS embedding as well as the PoS-aware action embedding, helps to better define the individual embedding space. B. Closed vs Open Vocabulary Embedding. C. Text Embedding Using an RNN. We provide here the results of replacing the text embedding function, g, with an RNN instead of the two-layer perceptron for the MMEN method. The RNN was modelled as a Gated Recurrent Unit (GRU). Captions were capped and zero-padded to a maximum length of 15 words. Adding a layer on top of the GRU proved not to be useful. Results of the RNN in the experiments are given under the name MMEN (Caption RNN). Given that the individual PoS inputs contain only a single verb and a low noun count, RNNs were not tested for the individual PoS-MMENs. Cross-modal and within-modal results can be seen in Tables 11 and 12 respectively. The inclusion of the RNN sees improvements in mAP performance for tv, vv and tt compared to MMEN (Caption). However, compared to MMEN ([Verb,Noun]) or JPoSE (Verb,Noun), using the entire caption still leads to worse results for both cross-modal and within-modal retrieval. D. Additional MSR-VTT Experiments (Sec. 4.2). Table 13 of this supplementary is an expanded version of Table 7 in the main paper, testing a variety of different combinations of PoS. For each row, an average of 10 runs is reported. This experiment also includes the removal of the NetVLAD layer in the MMEN, substituting it with mean pooling, which we label as AVG. Results show that, on their own, Determiners, Adjectives and Adpositions achieve very poor results. We also report three JPoSE disentanglement options: (Verb, Noun), (Caption\Verb, Verb) and the one in the main paper (Caption\Noun, Noun). The table shows that the best results are achieved when nouns are disentangled from the rest of the caption. Table 13. MSR-VTT Video-Caption Retrieval results using recall@k (R@k, higher is better) and median Rank (MR, lower is better). For each row, an average of 10 runs is reported. *We include results from [22], only available for Text-to-Video retrieval.
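As a reading aid for the MMEN (Caption RNN) variant in Supplementary Section C above, a minimal GRU text-embedding function g is sketched below: captions capped and zero-padded to 15 tokens, with the final hidden state projected and L2-normalised. The hidden size, the projection layer and the use of pre-computed 100-d Word2Vec inputs are assumptions consistent with the rest of the setup, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CaptionGRUEncoder(nn.Module):
    """Sketch of MMEN (Caption RNN): a GRU over word vectors replacing
    the two-layer perceptron text embedding g."""

    def __init__(self, word_dim=100, hidden_dim=256, out_dim=256, max_len=15):
        super().__init__()
        self.max_len = max_len
        self.gru = nn.GRU(word_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, word_vectors):
        # word_vectors: (B, max_len, word_dim), zero-padded to 15 words.
        _, h_n = self.gru(word_vectors)                 # final hidden state, shape (1, B, H)
        return F.normalize(self.proj(h_n.squeeze(0)), dim=1)

# Toy usage with random stand-ins for Word2Vec caption features.
enc = CaptionGRUEncoder()
print(enc(torch.randn(4, 15, 100)).shape)   # -> torch.Size([4, 256])
```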
4,495
1908.03477
2968848930
We address the problem of cross-modal fine-grained action retrieval between text and video. Cross-modal retrieval is commonly achieved through learning a shared embedding space, that can indifferently embed modalities. In this paper, we propose to enrich the embedding by disentangling parts-of-speech (PoS) in the accompanying captions. We build a separate multi-modal embedding space for each PoS tag. The outputs of multiple PoS embeddings are then used as input to an integrated multi-modal space, where we perform action retrieval. All embeddings are trained jointly through a combination of PoS-aware and PoS-agnostic losses. Our proposal enables learning specialised embedding spaces that offer multiple views of the same embedded entities. We report the first retrieval results on fine-grained actions for the large-scale EPIC dataset, in a generalised zero-shot setting. Results show the advantage of our approach for both video-to-text and text-to-video action retrieval. We also demonstrate the benefit of disentangling the PoS for the generic task of cross-modal video retrieval on the MSR-VTT dataset.
Rohrbach et al. @cite_24 investigate hand and pose estimation techniques for fine-grained activity recognition. By composing separate actions and treating them as attributes, they can predict unseen activities via novel combinations of seen actions. Mahdisoltani et al. @cite_9 train for four different tasks, including both coarse- and fine-grained action recognition. They conclude that training on fine-grained labels allows for better learning of features for coarse-grained tasks.
{ "abstract": [ "Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them.", "We describe a DNN for video classification and captioning, trained end-to-end, with shared features, to solve tasks at different levels of granularity, exploring the link between granularity in a source task and the quality of learned features for transfer learning. For solving the new task domain in transfer learning, we freeze the trained encoder and fine-tune a neural net on the target domain. We train on the Something-Something dataset with over 220, 000 videos, and multiple levels of target granularity, including 50 action groups, 174 fine-grained action categories and captions. Classification and captioning with Something-Something are challenging because of the subtle differences between actions, applied to thousands of different object classes, and the diversity of captions penned by crowd actors. Our model performs better than existing classification baselines for SomethingSomething, with impressive fine-grained results. And it yields a strong baseline on the new Something-Something captioning task. Experiments reveal that training with more fine-grained tasks tends to produce better features for transfer learning." ], "cite_N": [ "@cite_24", "@cite_9" ], "mid": [ "2156798932", "2902709482" ] }
Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
With the onset of the digital age, millions of hours of video are being recorded and searching this data is becoming a monumental task. It is even more tedious when searching shifts from video-level labels, such as 'dancing' or 'skiing', to short action segments like 'cracking eggs' or 'tightening a screw'. In this paper, we focus on the latter and refer to them as fine-grained actions. We thus explore the task of fine-grained action retrieval where both queries and retrieved results can be either a video sequence, or a textual caption describing the fine-grained action. Such free-form action descriptions allow for a more subtle characterisation of actions but require going beyond training a classifier on a predefined set of action labels [20,30]. As is common in cross-modal search tasks [26,36], we learn a shared embedding space onto which we project both videos and captions. By nature, fine-grained actions can Figure 1. We target fine-grained action retrieval. Action captions are broken using part-of-speech (PoS) parsing. We create separate embedding spaces for the relevant PoS (e.g. Noun or Verb) and then combine these embeddings into a shared embedding space for action retrieval (best viewed in colour). be described by an actor, an act and the list of objects involved in the interaction. We thus propose to learn a separate embedding for each part-of-speech (PoS), such as for instance verbs, nouns or adjectives. This is illustrated in Fig. 1 for two PoS (verbs and nouns). When embedding verbs solely, relevant entities are those that share the same verb/act regardless of the nouns/objects used. Conversely, for a PoS embedding focusing on nouns, different actions performed on the same object are considered relevant entities. This enables a PoS-aware embedding, specialised for retrieving a variety of relevant entities, given that PoS. The outputs from the multiple PoS embedding spaces are then combined within an encoding module that produces the final action embedding. We train our approach end-to-end, jointly optimising the multiple PoS embeddings and the final fine-grained action embedding. This approach has a number of advantages over training a single embedding space as is standardly done [7,8,15,22,24]. Firstly, this process builds different embeddings that can be seen as different views of the data, which contribute to the final goal in a collaborative manner. Secondly, it allows to inject, in a principled way, additional information but without requiring additional annotation, as parsing a caption for PoS is done automatically. Finally, when considering a single PoS at a time, for instance verbs, the cor-responding PoS-embedding learns to generalise across the variety of actions involving each verb (e.g. the many ways 'open' can be used). This generalisation is key to tackling more actions including new ones not seen during training. We present the first retrieval results for the recent largescale EPIC dataset [6] (Sec 4.1), utilising the released freeform narrations, previously unexplored for this dataset, as our supervision. Additionally, we show that our second contribution, learning PoS-aware embeddings, is also valuable for general video retrieval by reporting results on the MSR-VTT dataset [39] (Sec. 4.2). Method Our aim is to learn representations suitable for crossmodal search where the query modality is different from the target modality. 
Specifically, we use video sequences with textual captions/descriptions and perform video-to-text (vt) or text-to-video (tv) retrieval tasks. Additionally, we would like to make sure that classical search (where the query and the retrieved results have the same modalities) could still be performed in that representation space. The latter are referred to as video-to-video (vv) and text-to-text (tt) search tasks. As discussed in the previous section, several possibilities exist, the most common being embedding both modalities in a shared space such that, regardless of the modality, the representation of two relevant entities in that space are close to each other, while the representation of two nonrelevant entities are far apart. We first describe how to build such a joint embedding between two modalities, enforcing both cross-modal and within-modal constraints (Sec. 3.1). Then, based on the knowledge that different parts of the caption encode different aspects of an action, we describe how to leverage this information and build several disentangled Part of Speech embeddings (Sec. 3.2). Finally, we propose a unified representation well-suited for fine-grained action retrieval (Sec. 3.3). Multi-Modal Embedding Network (MMEN) This section describes a Multi-Modal Embedding Network (MMEN) that encodes the video sequence and the text caption into a common descriptor space. Let {(v i , t i )|v i ∈ V, t i ∈ T } be a set of videos with v i being the visual representation of the i th video sequence and t i the corresponding textual caption. Our aim is to learn two embedding functions f : V → Ω and g : T → Ω, such that f (v i ) and g(t i ) are close in the embedded space Ω. Note that f and g can be linear projection matrices or more complex functions e.g. deep neural networks. We denote the parameters of the embedding functions f and g by θ f and θ g respectively, and we learn them jointly with a weighted combination of two cross-modal (L v,t , L t,v ) and two within-modal (L v,v , L t,t ) triplet losses. Note that other point-wise, pairwise or list-wise losses can also be considered as alternatives to the triplet loss. The cross-modal losses are crucial to the task and en-sure that the representations of a query and a relevant item for that query from a different modality are closer than the representations of this query and a non-relevant item. We use cross-modal triplet losses [19,36]: L v,t (θ) = (i,j,k)∈Tv,t max γ + d(f vi , g tj ) − d(f vi , g t k ), 0 T v,t = {(i, j, k) | v i ∈ V, t j ∈ T i+ , t k ∈ T i− } (1) L t,v (θ) = (i,j,k)∈Tt,v max γ + d(g ti , f vj ) − d(g ti , f v k ), 0 T t,v = {(i, j, k) | t i ∈ T, v j ∈ V i+ , v k ∈ V i− }(2) where γ is a constant margin, θ = [θ f , θ g ], and d(.) is the distance function in the embedded space Ω. T i+ , T i− respectively define sets of relevant and non relevant captions and V i+ , V i− the sets of relevant and non relevant videos sequences for the multi-modal object (v i , t i ). To simplify the notation, f vi denotes f (v i ) ∈ Ω and g tj denotes g(t j ) ∈ Ω. Additionally, within-modal losses, also called structure preserving losses [19,36], ensure that the neighbourhood structure within each modality is preserved in the newly built joint embedding space. 
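The embedding functions f and g introduced above are later instantiated (see the architecture details further down) as two-layer perceptrons with ReLU and L2-normalised inputs and outputs, projecting into a 256-dimensional space. The sketch below follows that description; the hidden width and the exact input dimensions are assumptions, and the sketch is not the authors' TensorFlow implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MMENBranch(nn.Module):
    """Sketch of one MMEN embedding function (f for video or g for text):
    a two-layer perceptron with ReLU, L2-normalising input and output."""

    def __init__(self, in_dim, hidden_dim=512, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        x = F.normalize(x, dim=1)                # L2-normalise the input features
        return F.normalize(self.net(x), dim=1)   # and the embedded output

# Example instantiation (feature sizes are placeholders):
f = MMENBranch(in_dim=1024)   # video branch over pooled TSN features
g = MMENBranch(in_dim=100)    # text branch over averaged Word2Vec vectors
```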
Formally, L v,v (θ) = (i,j,k)∈Tv,v max γ + d(f vi , f vj ) − d(f vi , f v k ), 0 T v,v = {(i, j, k) | v i ∈ V, v j ∈ V i+ , v k ∈ V i− } (3) L t,t (θ) = (i,j,k)∈Tt,t max γ + d(g ti , g tj ) − d(g ti , g t k ), 0 T t,t = {(i, j, k) | t i ∈ T, t j ∈ T i+ , t k ∈ T i− }(4) using the same notation as before. The final loss used for the MMEN network is a weighted combination of these four losses, summed over all triplets in T defined as follows: L(θ) = λ v,v L v,v + λ v,t L v,t + λ t,v L t,v + λ t,t L t,t (5) where λ is a weighting for each loss term. Disentangled Part of Speech Embeddings The previous section described the generic Multi-Modal Embedding Network (MMEN). In this section, we propose to disentangle different caption components so each component is encoded independently in its own embedding space. To do this, we first break down the text caption into different PoS tags. For example, the caption "I divided the onion into pieces using wooden spoon" can be divided into verbs, [divide, using], pronouns, [I], nouns, [onion, pieces, spoon] and adjectives, [wooden]. In our experiments, we focus on the most relevant ones for finegrained action recognition: verbs and nouns, but we explore other types for general video retrieval. We extract all words from a caption for a given PoS tag and train one MMEN to only embed these words and the video representation in the same space. We refer to it as a PoS-MMEN. To train a PoS-MMEN, we propose to adapt the notion of relevance specifically to the PoS. This has a direct impact on the sets V i+ , V i− , T i+ , T i− defined in Equations (1)-(4). For example, the caption 'cut tomato' is disentangled into the verb 'cut' and the noun 'tomato'. Consider a PoS-MMEN focusing on verb tags solely. The caption 'cut carrots' is a relevant caption as the pair share the same verb 'cut'. In another PoS-MMEN focusing on noun tags solely, the two remain irrelevant. As the relevant/irrelevant sets differ within each PoS-MMEN, these embeddings specialise to that PoS. It is important to note that, although the same visual features are used as input for all PoS-MMEN, the fact that we build one embedding space per PoS trains multiple visual embedding functions f k that can be seen as multiple views of the video sequence. PoS-Aware Unified Action Embedding The previous section describes how to extract different PoS from captions and how to build PoS-specific MMENs. These PoS-MMENs can already be used alone for PoSspecific retrieval tasks, for instance a verb-retrieval task (e.g. retrieve all videos where "cut" is relevant) or a nounretrieval task. 1 More importantly, the output of different PoS-MMENs can be combined to perform more complex tasks, including the one we are interested in, namely finegrained action retrieval. Let us denote the k th PoS-MMEN visual and textual embedding functions by f k : V → Ω k and g k : T → Ω k . We define:v i = e v (f 1 vi , f 2 vi , . . . , f K vi ) t i = e t (g 1 t 1 i , g 2 t 2 i , . . . , g K t K i )(6) where e v and e t are encoding functions that combine the outputs of the PoS-MMENs. We explore multiple pooling functions for e v and e t : concatenation, max, average -the latter two assume all Ω k share the same dimensionality. Whenv i ,t i have the same dimension, we can perform action retrieval by directly computing the distance between these representations. We instead propose to train a final PoS-agnostic MMEN that unifies the representation, leading to our final JPoSE model. Joint Part of Speech Embedding (JPoSE). 
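The encoding functions $e_v$ and $e_t$ of Equation (6) reduce to simple pooling over the K PoS-MMEN outputs; a sketch of the three variants mentioned above (concatenation, max, average) follows.

```python
import torch

def pool_pos_embeddings(pos_embeddings, mode="concat"):
    """Sketch of e_v / e_t (Eq. 6): combine a list of K PoS-MMEN outputs,
    each of shape (B, D_k). 'max' and 'avg' assume all spaces Omega_k share
    the same dimensionality, as noted in the text."""
    if mode == "concat":
        return torch.cat(pos_embeddings, dim=1)        # (B, sum_k D_k)
    stacked = torch.stack(pos_embeddings, dim=0)       # (K, B, D), equal dims required
    if mode == "max":
        return stacked.max(dim=0).values               # (B, D)
    if mode == "avg":
        return stacked.mean(dim=0)                     # (B, D)
    raise ValueError(f"unknown pooling mode: {mode}")
```

Concatenation is the variant carried forward into the final JPoSE model in the ablations.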
Considering the PoS-aware representationsv i andt i as input and, still following our learning to rank approach, we learn the parametersθf andθĝ of the two embedding functionŝ f :V → Γ andĝ :T → Γ which project in our final embedding space Γ. We again consider this as the task of building a single MMEN with the inputsv i andt i , and follow the process described in Sec. 3.1. In other words, we train using the loss defined in Equation (5), which we denotê L here, which combines two cross-modal and two withinmodal losses using the triplets T v,t , T t,v , T v,v , T t,t formed using relevance between videos and captions based on the action retrieval task. As relevance here is not PoS-aware, we refer to this loss as PoS-agnostic. This is illustrated in Fig. 2. We learn the multiple PoS-MMENs and the final MMEN jointly with the following combined loss: L(θ, θ 1 , . . . θ K ) =L(θ) + K k=1 α k L k (θ k )(7) where α k are weighting factors,L is the PoS-agnostic loss described above and L k are the PoS-aware losses corresponding to the K PoS-MMENs. Experiments We first tackle fine-grained action retrieval on the EPIC dataset [6] (Sec. 4.1) and then the general video retrieval task on the MSR-VTT dataset [39] (Sec. 4.2). This allows us to explore two different tasks using the proposed multimodal embeddings. The large English spaCy parser [1] was used to find the Part Of Speech (PoS) tags and disentangle them in the captions of both datasets. Statistics on the most frequent PoS tags are shown in Table 1. As these statistics show, EPIC contains mainly nouns and verbs, while MSR-VTT has longer captions and more nouns. This will have an impact of the PoS chosen for each dataset when building the JPoSE model. Fine-Grained Action Retrieval on EPIC Dataset. The EPIC dataset [6] is an egocentric dataset with 32 participants cooking in their own kitchens who then narrated the actions in their native language. The narrations were translated to English but maintain the open vocabulary selected by the participants. We employ the released free-form narrations to use this dataset for fine-grained action retrieval. We follow the provided train/test splits. Note that by construction there are two test sets: Seen and Unseen, referring to whether the kitchen has been seen in the training set. We follow the terminology from [6], and note that this terminology should not be confused with the zeroshot literature which distinguishes seen/unseen classes. The actual sequences are strictly disjoint between all sets. Additionally, we train only on the many-shot examples from EPIC excluding all examples of the few shot classes from the training set. This ensures each action has more than 100 relevant videos during training and increases the number of zero-shot examples in both test sets. Building relevance sets for retrieval. The EPIC dataset offers an opportunity for fine-grained action retrieval, as the open vocabulary has been grouped into semantically relevant verb and noun classes for the action recognition challenge. For example, 'put', 'place' and 'put-down' are grouped into one class. As far as we are aware, this paper presents the first attempt to use the open vocabulary narrations released to the community. We determine retrieval relevance scores from these semantically grouped verb and noun classes 2 , defined in [6]. These indicate which videos and captions should be considered related to each other. 
Following these semantic groups, a query 'put mug' and a video with 'place cup' in its caption are considered relevant as 'place' and 'put' share the same verb class and 'mug' and 'cup' share the same noun class. Subsequently, we define the triplets T v,t , T t,v , T v,v , T t,t used to train the MMEN models and to compute the lossL in JPoSE. When training a PoS-MMEN, two videos are considered relevant only within that PoS. Accordingly, 'put onion' and 'put mug' are relevant for verb retrieval, whereas, 'put cup' and 'take mug' are for noun retrieval. The corresponding PoS-based relevances define the triplets T k for L k . Experimental Details Video features. We extract flow and appearance features using the TSN BNInception model [37] Text features. We map each lemmatised word to its feature vector using a 100-dimension Word2Vec model, trained on the Wikipedia corpus. Multiple word vectors with the same part of speech were aggregated by averaging. We also experimented with the pre-trained 300-dimension Glove model, and found the results to be similar. Architecture details. We implement f k and g k in each MMEN as a 2 layer perceptron (fully connected layers) with ReLU. Additionally, the input vectors and output vectors are L2 normalised. In all cases, we set the dimension of the embedding space to 256, a dimension we found to be suitable across all settings. We use a single layer perceptron with shared weights forf andĝ that we initialise with PCA. Training details. The triplet weighting parameters are set to λ v,v = λ t,t = 0.1 and λ v,t = λ t,v = 1.0 and the loss weightings α k are set to 1. The embedding models were implemented in Python using the Tensorflow library. We trained the models with an Adam solver and a learning rate of 1e −5 , considering batch sizes of 256, where for each query we sample 100 random triplets from the corresponding T v,t , T t,v , T v,v , T t,t sets. The training in general converges after a few thousand iterations, we report all results after 4000 iterations. Evaluation metrics. We report mean average precision (mAP), i.e. for each query we consider the average precision over all relevant elements and take the mean over all queries. We consider each element in the test set as a query in turns. When reporting within-modal retrieval mAP, the corresponding item (video or caption) is removed from the test set for that query. Results First, we consider cross-modal and within-modal finegrained action retrieval. Then, we present an ablation study as well as qualitative results to get more insights. Finally we show that our approach is well-suited for zero-shot settings. These models are also compared to standard baselines. The Random Baseline randomly ranks all the database items, providing a lower bound on the mAP scores. The CCA-baseline applies Canonical Correlation Analysis to both modalities v i and t i to find a joint embedding space for cross-modal retrieval [9]. Finally, Features (Word2Vec) and Features (Video), which are only defined for withinmodal retrieval (i.e. vv and tt), show the performance when we directly use the video representation v i or the averaged Word2Vec caption representation t i . Cross-modal retrieval. Table 11 presents cross-modal results for fine-grained action retrieval. The main observation is that the proposed JPoSE outperforms all the MMEN variants and the baselines for both video-to-text (vt) and textto-video retrieval (tv), on both test sets. 
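The text-feature step above (averaging the Word2Vec vectors of same-PoS lemmas) can be sketched as follows. The gensim loading call and the model filename are assumptions: the text only states that a 100-dimension Word2Vec model trained on the Wikipedia corpus was used, and the zero-vector fallback for fully out-of-vocabulary inputs is a simplification of this sketch.

```python
import numpy as np
from gensim.models import KeyedVectors

# Placeholder path for an externally trained 100-d Wikipedia Word2Vec model.
w2v = KeyedVectors.load_word2vec_format("wiki_word2vec_100d.bin", binary=True)

def encode_pos_words(words, dim=100):
    """Average the word vectors of the lemmatised words sharing one PoS tag;
    out-of-vocabulary words are skipped."""
    vecs = [w2v[w] for w in words if w in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim, dtype=np.float32)

t_noun = encode_pos_words(["onion", "piece", "spoon"])   # noun input to the noun-MMEN
t_verb = encode_pos_words(["divide", "use"])             # verb input to the verb-MMEN
```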
We also note that MMEN ([Verb, Noun]) outperforms other MMEN variants, showing the benefit of learning specialised embeddings. Yet the full JPoSE is crucial to get the best results. Within-modal retrieval. Table 12 shows the withinmodal retrieval results for both text-to-text (tt) and videoto-video (vv) retrieval. Again, JPoSE outperforms all the flavours of MMEN on both test sets. This shows that by learning a cross-modal embedding we inject information from the other modality that helps to better disambiguate and hence to improve the search. Ablation study. We evaluate the role of the components of the proposed JPoSE model, for both cross-modal and within-modal retrieval. Table 4 reports results comparing different options for the encoding functions e v and e t in addition to learning the model jointly both with and without learned functionsf andĝ. This confirms that the proposed approach is the best option. In the supplementary material, we also compare the performance when using the closed vocabulary classes from EPIC to learn the embedding. Table 5 shows the zero-shot (ZS) counts in both test sets. In total 12% of the videos in both test sets are zero-shot instances. We separate cases where the noun is present in the training set but the verb is not, denoted by ZSV (zero-shot verb), from ZSN (zero-shot noun) where the verb is present but not the noun. Cross-modal ZS retrieval results for this interesting setting are shown in Table 6. We compare JPoSE to MMEN (Caption) and baselines. Results show that the proposed JPoSE model clearly improves over these zero-shot settings, thanks to the different views captured by the multiple PoS embeddings, specialised to acts and objects. Qualitative results. Fig. 3 illustrates both video-to-text and text-to-video retrieval. For several queries, it shows the relevance of the top-50 retrieved items (relevant in green, nonrelevant in grey). Fig. 4 illustrates our motivation that disentangling PoS embeddings would learn different visual functions. It presents maximum activation examples on chosen neurons within f i for both verb and noun embeddings. Each cluster represents the 9 videos that respond maximally to one of these neurons 3 . We can remark that noun activations indeed correspond to objects of shared appearance occurring in different actions (in the figure, chopping boards in one and cutlery in the second), while verb embedding neuron General Video Retrieval on MSR-VTT Dataset. We select MSR-VTT [39] as a public dataset for general video retrieval. Originally used for video captioning, this large-scale video understanding dataset is increasingly evaluated for video-to-text and text-to-video retrieval [8,22,24,41,23]. We follow the code and setup of [22] using the same train/test split that includes 7,656 training videos each with 20 different captions describing the scene and 1000 test videos with one caption per video. We also follow the evaluation protocol in [22] and compute recall@k (R@K) and median rank (MR). In contrast to the EPIC dataset, there is no semantic groupings of the captions in MSR-VTT. Each caption is considered relevant only for a single video, and two captions describing different videos are considered irrelevant even if they share semantic similarities. Furthermore, disentangling captions yields further semantic similarities. For example, "A cooking tutorial" and "A person is cooking", for a verb-MMEN, will be considered irrelevant as they belong to different videos even though they share the same single verb 'cook'. 
Consequently, we can not directly apply JPoSE as proposed in Sec. 3.3. Instead, we adapt JPoSE to this problem as follows. We use the Mixture-of-Expert Embeddings (MEE) model from [22], as our core MMEN network. In Table 7. MSR-VTT Video-Caption Retrieval results. *We include results from [22], only available for Text-to-Video retrieval. fact, MEE is a form of multi-modal embedding network in that it embeds videos and captions into the same space. We instead focus on assessing whether disentangling PoS and learning multiple PoS-aware embeddings produce better results. In this adapted JPoSE we encode the output of the disentangled PoS-MMENs with e v and e t (i.e. concatenated) and use NetVLAD [3] to aggregate Word2Vec representations. Instead of the combined loss in Equation (7), we use the pair loss, used also in [22]: L(θ) = 1 B B i j =i max γ + d(f vi , g ti ) − d(f vi , g tj ), 0 + max γ + d(f vi , g ti ) − d(f vj , g ti ), 0(8) This same loss is used when we train different MMENs. Visual and text features. We use appearance, flow, audio and facial pre-extracted visual features provided from [22]. For the captions, we extract the encodings ourselves 4 using the same Word2Vec model as for EPIC. Results. We report on video-to-text and text-to-video retrieval on MSR-VTT in Table 7 for the standard baselines and several MMEN variants. Comparing MMENs, we note that nouns are much more informative than verbs for this retrieval task. MMEN results with other PoS tags (shown in the supplementary) are even lower, indicating that they are 4 Note that this explains the difference between the results reported in [22] (shown in the first row of the Table 7) and MMEN (Caption). not informative alone. Building on these findings, we report results of a JPoSE combining two MMENs, one for nouns, and one for the remainder of the caption (Caption\Noun). Our adapted JPoSE model consistently outperforms fullcaption single embedding for both video-to-text and textto-video retrieval. We report other PoS disentanglement results in supplementary material. Qualitative results. Figure 5 shows qualitative results comparing using the full caption and JPoSE noting the disentangled model's ability to commonly rank videos closer to their corresponding captions. Conclusion We have proposed a method for fine-grained action retrieval. By learning distinct embeddings for each PoS, our model is able to combine these in a principal manner and to create a space suitable for action retrieval, outperforming approaches which learn such a space through captions alone. We tested our method on a fine-grained action retrieval dataset, EPIC, using the open vocabulary labels. Our results demonstrate the ability for the method to generalise to zero-shot cases. Additionally, we show the applicability of the notion of disentangling the caption for the general video-retrieval task on MSR-VTT. Table 9. Noun retrieval task results on the seen test set of EPIC-Kitchens. Supplementary Material A. Individual Part-of-Speech Retrieval (Sec. 3 .3) In the main manuscript, we report results on the task of fine-grained action retrieval. For completion, we here present results on individual Part-of-Speech (PoS) retrieval tasks. In Table 8, we report results for fine-grained verb retrieval (i.e. only retrieve the relevant verb/action in the video). We include the standard baselines and we additionally report the results obtained by a PoS-MMEN, that is a single embedding for verbs solely. We compare this to our proposed multi-embedding JPoSE. 
Using JPoSE produces better (or the same) results for both cross-modal and within-modal searches. Similarly, in Table 9, we compare results for fine-grained noun retrieval (i.e. only retrieve the relevant noun/object in the video). We show similar increases in mAP over crossmodal and within-modal searches. This indicates the complementary PoS information, from the other PoS embedding as well as the PoS-aware action embedding, helps to better define the individual embedding space. B. Closed vs Open Vocabulary Embedding C. Text embedding Using RNN We provide here the results of replacing the text embedding function, g, with an RNN instead of the two layer perceptron for the MMEN method. The RNN was modelled as a Gated Recurrent Unit (GRU). Captions were capped and zero-padded to a maximum length of 15 words. Adding a layer on top of the GRU proved not to be useful. Results of the RNN in the experiments are given under the name MMEN (Caption RNN). Given the singular verb and low noun count RNNs were not tested for the individual PoS-MMENs. Cross-Modal and Within-Modal Results can be seen in Tables 11 and 12 respectively. The inclusion of the RNN sees improvements in mAP performance for tv, vv and tt compared to MMEN (caption). However, compared to MMEN ([Verb,Noun]) or JPoSE (Verb,Noun) using the entire caption still leads to worse results for both cross and within modal retrieval. D. Additional MSR-VTT Experiments (Sec. 4.2) Table 13 of this supplementary is an expanded version of Table 7 in the main paper testing a variety of different combinations for PoS. For each row, an average of 10 runs is reported. This experiment also includes the removal of the NetVLAD layer in the MMEN, substituting it with mean pooling which we label as AVG. Results show that, on their own, Determinants, Adjectives and Adpositions achieve very poor results. We also report three JPoSE disentanglement options: (Verb, Noun), (Caption\Verb, Verb) and the one in the main paper (Capiton\Noun, Noun). The table shows that the best results are achieved when nouns are disentangled from the rest of the caption. Table 13. MSR-VTT Video-Caption Retrieval results using recall@k (R@k, higher is better) and median Rank (MR, lower is better). For each row, an average of 10 runs is reported. *We include results from [22], only available for Text-to-Video retrieval.
4,495
1908.03121
2966129987
We study the simulation of stellar mergers, which requires complex simulations with high computational demands. We have developed Octo-Tiger, a finite volume grid-based hydrodynamics simulation code with Adaptive Mesh Refinement which is unique in conserving both linear and angular momentum to machine precision. To face the challenge of increasingly complex, diverse, and heterogeneous HPC systems, Octo-Tiger relies on high-level programming abstractions. We use HPX with its futurization capabilities to ensure scalability both between nodes and within, and present first results replacing MPI with libfabric achieving up to a 2.8x speedup. We extend Octo-Tiger to heterogeneous GPU-accelerated supercomputers, demonstrating node-level performance and portability. We show scalability up to full system runs on Piz Daint. For the scenario's maximum resolution, the compute-critical parts (hydrodynamics and gravity) achieve 68.1 parallel efficiency at 2048 nodes.
There are several studies that investigate the structure of mass loss in V1309 Scorpii through computer simulation. One approach to modeling this system is smoothed-particle hydrodynamics (SPH). Notable SPH applications include StarSmasher @cite_44 @cite_18 (a fork of StarCrash @cite_8 ) and an unpublished code developed by a collaboration of researchers from Princeton University, Columbia University, and Osaka University @cite_45 @cite_31 . An alternative approach is to use the finite volume method to simulate mass transfer. Examples of such applications include Athena @cite_38 @cite_57 and its rewrite named Athena++ @cite_19 @cite_9 @cite_6 . Lastly, Enzo @cite_20 is a project that implements finite volume hydrodynamics along with a collisionless N-body module that can be used to simulate binary systems where one component is taken to be a point mass. With the exception of SPH codes using direct summation for gravity, Octo-Tiger is unique among three-dimensional self-gravitating hydrodynamics codes in that it simultaneously conserves both linear and angular momentum to machine precision. SPH codes using direct summation for gravity are limited to only a few thousand particles, making Octo-Tiger the better choice for high-resolution simulations.
{ "abstract": [ "A new code for astrophysical magnetohydrodynamics (MHD) is described. The code has been designed to be easily extensible for use with static and adaptive mesh refinement. It combines higher order Godunov methods with the constrained transport (CT) technique to enforce the divergence-free constraint on the magnetic field. Discretization is based on cell-centered volume averages for mass, momentum, and energy, and face-centered area averages for the magnetic field. Novel features of the algorithm include (1) a consistent framework for computing the time- and edge-averaged electric fields used by CT to evolve the magnetic field from the time- and area-averaged Godunov fluxes, (2) the extension to MHD of spatial reconstruction schemes that involve a dimensionally split time advance, and (3) the extension to MHD of two different dimensionally unsplit integration methods. Implementation of the algorithm in both C and FORTRAN95 is detailed, including strategies for parallelization using domain decomposition. Results from a test suite which includes problems in one-, two-, and three-dimensions for both hydrodynamics and MHD are given, not only to demonstrate the fidelity of the algorithms, but also to enable comparisons to other methods. The source code is freely available for download on the web.", "", "", "Luminous red novae transients, presumably from stellar coalescence, exhibit long-term precursor emission over hundreds of binary orbits leading to impulsive outbursts, with durations similar to a single orbital period. In an effort to understand these signatures, we present and analyze a hydrodynamic model of unstable mass transfer from a giant-star donor onto a more-compact accretor in a binary system. Our simulation begins with mass transfer at the Roche limit separation and traces a phase of runaway decay leading up to the plunge of the accretor within the envelope of the donor. We characterize the fluxes of mass and angular momentum through the system and show that the orbital evolution can be reconstructed from measurements of these quantities. The morphology of outflow from the binary changes significantly as the binary orbit tightens. At wide separations, a thin stream of relatively high-entropy gas trails from the outer Lagrange points. As the orbit tightens, the orbital motion desynchronizes from the donor's rotation, and low-entropy ejecta trace a broad fan of largely-ballistic trajectories. An order-of-magnitude increase in mass ejection rate accompanies the plunge of the accretor with the envelope of the donor. We argue that this transition marks the precursor-to-outburst transition observed in stellar coalescence transients.", "Recent observations have revealed that the remnants of stellar-coalescence transients are bipolar. This raises the questions of how these bipolar morphologies arise and what they teach us about the mechanisms of mass ejection during stellar mergers and common envelope phases. In this paper, we analyze hydrodynamic simulations of the lead-in to binary coalescence, a phase of unstable Roche lobe overflow that takes the binary from the Roche limit separation to the engulfment of the more-compact accretor within the envelope of the extended donor. As mass transfer runs away to increasing rates, gas trails away from the binary. Contrary to previous expectations, early mass loss remains bound to the binary and forms a circumbinary torus. 
Later ejecta, generated as the accretor grazes the surface of the donor, have very different morphology and are unbound. These two components of mass loss from the binary interact as later, higher-velocity ejecta collide with the circumbinary torus formed by earlier mass loss. Unbound ejecta are redirected toward the poles and escaping material creates a bipolar outflow. Our findings show that the transition from bound to unbound ejecta from coalescing binaries can explain the bipolar nature of their remnants, with implications for our understanding of the origin of bipolar remnants of stellar coalescence transients and, perhaps, some pre-planetary nebulae.", "", "", "", "We study transients produced by equatorial disk-like outflows from catastrophically mass-losing binary stars with an asymptotic velocity and energy deposition rate near the inner edge which are proportional to the binary escape velocity v_esc. As a test case, we present the first smoothed-particle radiation-hydrodynamics calculations of the mass loss from the outer Lagrange point with realistic equation of state and opacities. The resulting spiral stream becomes unbound for binary mass ratios 0.06 < q < 0.8. For synchronous binaries with non-degenerate components, the spiral-stream arms merge at a radius of 10a, where a is the binary semi-major axis, and the accompanying shock thermalizes about 10 of the kinetic power of the outflow. The mass-losing binary outflows produce luminosities reaching up to 10^6 L_Sun and effective temperatures spanning 500 < T_eff < 6000 K, which is compatible with many of the class of recently-discovered red transients such as V838 Mon and V1309 Sco. Dust readily forms in the outflow, potentially in a catastrophic global cooling transition. The appearance of the transient is viewing angle-dependent due to vastly different optical depths parallel and perpendicular to the binary plane. We predict a correlation between the peak luminosity and the outflow velocity, which is roughly obeyed by the known red transients. Outflows from mass-losing binaries can produce luminous (10^5 L_Sun) and cool (T_eff < 1500 K) transients lasting a year or longer, as has potentially been detected by Spitzer surveys of nearby galaxies.", "Binary stars commonly pass through phases of direct interaction which result in the rapid loss of mass, energy, and angular momentum. Though crucial to understanding the fates of these systems, including their potential as gravitational wave sources, this short-lived phase is poorly understood and has thus far been unambiguously observed in only a single event, V1309 Sco. Here we show that the complex and previously-unexplained photometric behavior of V1309 Sco prior to its main outburst results naturally from the runaway loss of mass and angular momentum from the outer Lagrange point, which lasts for thousands of orbits prior to the final dynamical coalescence, much longer than predicted by contemporary models. This process enshrouds the binary in a \"death spiral\" outflow, which affects the amplitude and phase modulation of its light curve, and contributes to driving the system together. The total amount of mass lost during this gradual phase ( @math ) rivals the mass lost during the subsequent dynamical interaction phase, which has been the main focus of \"common envelope\" modeling so far. 
Analogous features in related transients suggest that this behavior is ubiquitous.", "This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and met al-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology." ], "cite_N": [ "@cite_38", "@cite_18", "@cite_8", "@cite_9", "@cite_6", "@cite_44", "@cite_57", "@cite_19", "@cite_45", "@cite_31", "@cite_20" ], "mid": [ "2116091257", "", "", "2792351952", "2886880539", "", "", "", "2196561007", "2763929764", "2102852529" ] }
From Piz Daint to the Stars: Simulation of Stellar Mergers using High-Level Abstractions
Astrophysical simulations are among the classical drivers for exascale computing. They require multiple scales of physics and cover vast scales in space and time. Even the next generation of high-performance computing (HPC) systems will be insufficient to solve more than a fraction of the many conceivable scenarios. However, new HPC systems come not only with ever larger processor counts, but increasingly complex, diverse, and heterogeneous hardware. Evolving manycore architectures and GPUs are combined with multicore systems. This raises challenges especially for large-scale HPC simulation codes and requires going beyond traditional programming models. High-level abstractions are required to ensure that codes are portable and can be run on current HPC systems without the need to rewrite large portions of the code. We consider the simulation of stellar phenomena based on the simulation framework Octo-Tiger. In particular, we study the simulation of time-evolving stellar mergers (Fig. 1). The study of binary star evolution from the onset of mass transfer to merger can provide fundamental insight into the underlying physics. In 2008, this phenomenon was observed with photometric data, when the contact binary V1309 Scorpii merged to form a luminous red nova [58]. The vision of our work is to model this event with simulations on HPC systems. Comparing the results of our simulations with the observations will enable us to validate the model and to improve our understanding of the physical processes involved. Octo-Tiger is an HPC application and relies on high-level abstractions, in particular HPX and Vc. While HPX provides scheduling and scalability, both between nodes and within them, Vc ensures portable vectorization across processor-based platforms. To make use of GPUs we use HPX's CUDA integration in this work. Previous work has demonstrated scalability on Cori, a Cray XC40 system installed at the National Energy Research Scientific Computing Center (NERSC) [27]. However, the underlying node-level performance was rather low, and only a few time steps could be simulated. Consequently, they began studying node-level performance, achieving 408 GFLOPS on the 64 cores of the Intel Knights Landing manycore processor [45]. Using the same high-level abstractions as on multicore systems, this led to a speedup of 2 compared to a 24-core Intel Skylake-SP platform. In this work, we make use of the same CPU-level abstraction library Vc [31] for SIMD vector parallelism as in the previous study, but extend Octo-Tiger to support GPU-based HPC machines. We show how the critical node-level bottleneck, the fast multipole method (FMM) kernels, can be mapped to GPUs. Our approach utilizes GPUs as co-processors, running up to 128 FMM kernels on each one simultaneously. This was implemented using CUDA streams and uses HPX's futurization approach for lock-free, low-overhead scheduling. We demonstrate the performance portability of Octo-Tiger for a set of GPU and processor-based HPC nodes. To scale more efficiently to thousands of nodes, we have integrated a new libfabric communication backend into HPX, where it can be used transparently by Octo-Tiger, the first large scientific application to use the new network layer. The libfabric implementation extensively uses one-sided communication to reduce the overhead compared to a standard two-sided MPI-based backend.
To demonstrate both our node-level GPU capabilities and our improved scalability with libfabric, we show results for full-scale runs on Piz Daint running the real-world stellar merger scenario of V1309 Scorpii for a few time-steps. Piz Daint is a Cray XC40/XC50 equipped with NVIDIA's P100 GPUs at the Swiss National Supercomputing Centre (CSCS). For our full system runs we used up to 5400 out of 5704 nodes. This is the first time an HPX application was run on the full system of a GPU-accelerated supercomputer. In Sec. 2 we briefly discuss related approaches. We describe the stellar scenario in more detail in Sec. 3, and the important parts of the overall software framework and the high-level abstractions they provide in Sec. 4. In turn, Sec. 5 shows the main contributions of this work, describing both the new libfabric parcelport and the way we utilize GPUs to accelerate the execution of critical kernels. In Sec. 6.1, we present our node-level performance results for NVIDIA GPUs, Intel Xeons, and an Intel Xeon Phi platform. Section 6.2 describes our scaling results, showing that we are able to scale with both an MPI communication backend and a libfabric communication backend of HPX. We show that the use of libfabric strongly improves performance at scale. SCENARIO: STELLAR MERGERS In September 2008, the contact binary V1309 Scorpii merged to form a luminous red nova (LRN) [58]. The Optical Gravitational Lensing Experiment (OGLE) observed this binary prior to its merger, and six years of data show its period decreasing. When the merger occurred, the system increased in brightness by a factor of about 5000. Mason et al. [39] observed the outburst spectroscopically, confirming it as an LRN. This was the first observed stellar merger of a contact binary with photometric data available prior to its merger. Possible progenitor systems for V1309 Scorpii, consisting initially of zero-age main sequence stars with unequal masses in a relatively narrow range, were proposed by Stepien in [50]. As the heavier of the two stars first begins to expand into a red giant, it transfers mass to its lower-mass companion, forming a common envelope. The binary's orbit shrinks due to friction, and the mass transfer slows down as the companion becomes the heavier of the two stars but continues to grow at the expense of the first star. Eventually this star also expands, with both stars now touching each other and forming a contact binary. Stepien et al. sampled the space of physically possible initial masses, finding that initial primary masses between 1.1 M⊙ and 1.3 M⊙ and initial secondary masses between 0.5 M⊙ and 0.9 M⊙ produced results consistent with observations prior to merger. The evolution described above results in an approximately 1.52−1.54 M⊙ primary and a 0.16−0.17 M⊙ secondary with helium cores and Sun-like atmospheres. It is theorized that the merger itself was due to the Darwin instability. When the total spin angular momentum of a binary system exceeds one third of its orbital angular momentum, the system can no longer maintain tidal synchronization. This results in a rapid tidal disruption and merger. Octo-Tiger uses its Self-Consistent Field module [20,23] to produce an initial model for V1309 to simulate this last phase of dynamical evolution. The stars are tidally synchronized and share a common atmosphere. The system parameters are chosen such that the spin angular momentum just barely exceeds one third of the orbital angular momentum.
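Stated as an inequality (the standard form of the Darwin criterion, written out here for clarity; the notation is ours and not taken from the Octo-Tiger sources), the instability sets in once

  J_spin > (1/3) J_orb

where J_spin is the total spin angular momentum of the two components and J_orb the orbital angular momentum of the binary.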
Octo-Tiger begins the simulation just as the Darwin instability sets in (Fig. 1). SOFTWARE FRAMEWORK 4.1 HPX We have developed the Octo-Tiger application framework [52] in ISO C++11 using HPX [24-26, 28, 29, 51]. HPX is a C++ standard library for distributed and parallel programming built on top of an Asynchronous Many Task (AMT) runtime system. Such AMT runtimes may provide a means for helping programming models to fully exploit available parallelism on complex emerging HPC architectures. The HPX methodology described here includes the following essential components: • An ISO C++ standard-conforming API that enables wait-free asynchronous parallel programming, including futures, channels, and other primitives for asynchronous execution. • An Active Global Address Space (AGAS) that supports load balancing via object migration and enables exposing a uniform API for local and remote execution. • An active-message networking layer that enables running functions close to the objects they operate on. This also implicitly overlaps computation and communication. • A work-stealing lightweight task scheduler that enables finer-grained parallelization and synchronization and automatic load balancing across all local compute resources. • APEX, an in-situ profiling and adaptive tuning framework. The design features of HPX allow application developers to naturally use key parallelization and optimization techniques, such as overlapping communication and computation, decentralizing control flow, oversubscribing execution resources, and sending work to data instead of data to work. As a result, Octo-Tiger achieves exceptionally high system utilization and exposes very good weak- and strong-scaling behaviour. HPX exposes an asynchronous, standards-conforming programming model enabling Futurization, with which developers can express complex dataflow execution trees that generate billions of HPX tasks that are scheduled to execute only when their dependencies are satisfied [27]. Also, Futurization enables automatic parallelization and load-balancing to emerge. Additionally, HPX provides a performance counter and adaptive tuning framework that allows users to access performance data, such as core utilization, task overheads, and network throughput; these diagnostic tools were instrumental in scaling Octo-Tiger to the full machine. This paper demonstrates the viability of the HPX programming model at scale using Octo-Tiger, a portable and standards-conforming application. Octo-Tiger fully embraces the C++ Parallel Programming Model, including additional constructs that are incrementally being adopted into the ISO C++ Standard. The programming model views the entire supercomputer as a single C++ abstract machine. A set of tasks operates on a set of C++ objects distributed across the system. These objects interact via asynchronous function calls; a function call to an object on a remote node is relayed as an active message to that node. A powerful and composable primitive, the future object represents and manages asynchronous execution and dataflow. A crucial property of this model is the semantic and syntactic equivalence of local and remote operations. This provides a unified approach to intra- and inter-node parallelism based on proven generic algorithms and data structures available in today's ISO C++ Standard. The programming model is intuitive and enables performance portability across a broad spectrum of increasingly diverse HPC hardware.
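To make the futurization style concrete, the following minimal sketch shows how a dependent computation can be chained onto asynchronous work with standard HPX primitives. It is an illustration only: the two work functions are hypothetical, the code is not taken from Octo-Tiger, and header paths differ between HPX versions.

#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

double solve_gravity(int cell_id);          // hypothetical work function
double update_hydro(double potential);      // hypothetical work function

hpx::future<double> advance_cell(int cell_id)
{
    // Launch the gravity solve as a task; the calling thread is not blocked.
    hpx::future<double> potential = hpx::async(solve_gravity, cell_id);

    // Chain the hydro update as a continuation: the runtime schedules it
    // only once the gravitational potential is available.
    return potential.then(
        [](hpx::future<double> f) { return update_hydro(f.get()); });
}

A call to an object living on a remote locality follows the same pattern, with the runtime delivering the invocation as an active message to the owning node.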
Octo-Tiger Octo-Tiger simulates the evolution of mass density, momentum, and energy of interacting binary stellar systems from the start of mass transfer to merger. It also evolves five passive scalars. It is a three-dimensional finite-volume code with Newtonian gravity that simulates binary star systems as self-gravitating compressible inviscid fluids. To simulate these fluids we need three core components: (1) a hydrodynamics solver, (2) a gravity solver that calculates the gravitational field produced by the fluid distribution, and (3) a solver to generate an initial configuration of the star system. The passive scalars, expressed in units of mass density, are evolved using the same continuity equation that describes the evolution of the mass density. They do not influence the flow itself, but are rather used to track various fluid fractions as the system evolves. In the case of V1309, these scalars are initialized to the mass density of the accretor core, the accretor envelope, the donor core, the donor envelope, and the common atmosphere between the two stars. The passive scalars are useful in post-processing. For instance, to compute the temperature we require the mass and energy densities as well as the number density. The latter is not evolved in the simulation, but can be computed from the passive scalars assuming a composition for each fraction (e.g. helium for both cores, and a solar composition for the remaining fractions). The balance of angular momentum plays an important role in the orbital evolution of binary systems. Three-dimensional astrophysical fluid codes with self-gravity do not typically conserve angular momentum. The magnitude of this violation is dependent on the particular problem and resolution. Previous works have found relative violations as high as 10^-3 per orbit [16,38,41]. This error, accumulated over several dozen orbits, becomes significant enough to influence the fate of the system. Octo-Tiger conserves both linear and angular momenta to machine precision. In the fluid solver, this is accomplished using a technique described by [18], while the gravity solver uses our own extension to the FMM. Octo-Tiger's main datastructure is a rotating Cartesian grid with adaptive mesh refinement (AMR). It is based on an adaptive octree structure. Each node is an N^3 sub-grid (with N = 8 for all runs in this paper) containing the evolved variables, and can be further refined into eight child nodes. Each octree node is implemented as an HPX component. These octree nodes are distributed onto the compute nodes using a space-filling curve. For further information about implementation details we refer to [45] and [37]. The first solver that operates on this octree is a finite volume hydrodynamics solver. Octo-Tiger uses the central advection scheme of [32]. The piece-wise parabolic method (PPM) [13] is used to compute the thermodynamic variables at cell faces. A method detailed by [38] is used to conserve total energy in its interaction with the gravitational field. This technique involves applying the advection scheme to the sum of gas kinetic, internal, and potential energies, resulting in conservation of the total energy. Numerical precision of internal energy densities can suffer greatly in high Mach flows, where the kinetic energy dwarfs the gas internal energy. We use the dual-energy formalism of [10] to overcome this issue: We evolve both the gas total energy and the entropy.
The internal energy is then computed from one or the other depending on the Mach number (entropy for high Mach flows and total gas energy for low Mach ones). The angular momentum technique described by [18] is applied to the PPM reconstruction. It adds a degree of freedom to the reconstruction of velocities on cell faces by allowing for the addition of a spatially constant angular velocity component to the linear velocities. This component is determined by evolving three additional variables corresponding to the spin angular momentum for a given cell. The gravitational field solver is based on the FMM. Octo-Tiger is unique in conserving both linear and angular momentum simultaneously and at scale using modifications to the original FMM algorithm [36,37]. Finally, we assemble the initial scenario using the Self-Consistent Field technique alongside the FMM solver. Octo-Tiger can produce initial models for binary systems that are in contact, semi-detached, or detached [37]. Since it is calculated only once, the computational demands of this solver are negligible for full-size runs. We used a test suite of four verification tests, recommended by Tasker et al. [56] for self-gravitating astrophysical codes, to verify the correctness of our results. The first two are purely hydrodynamic tests: the Sod shock tube and the Sedov-Taylor blast wave. Both have analytical solutions which we can use for comparisons. The third and fourth tests are a globular star cluster in equilibrium and one in motion. In each case, the equilibrium structure should be retained. Because Octo-Tiger is intended to simulate individual stars self-consistently, we have substituted a single star in equilibrium at rest for the third test and a single star in equilibrium in motion for the fourth test. The FMM hotspot The most compute-intensive task is the calculation of the gravitational field using the FMM, since this has to be done for each of the fluid-solver time-steps. Note that our FMM variant differs from approaches such as the implementation used in [61]. While being distributed and GPU-capable, their FMM operates on particles. Our FMM variant operates on the grid cells directly since each grid cell has a density value which determines its mass, and thus its gravitational influence on other cells. We further differ from other (cell-based) FMM variants used for computing gravitational fields by conserving not only linear momentum, but also angular momentum, down to machine precision using the changes outlined in [36]. Due to its computational intensity, we will take a closer look at the FMM and its kernels in this section. The FMM algorithm consists of three steps. First, it computes the multipole moments and the centers of mass of the individual cells. This information is then used to calculate Taylor-series expansion coefficients in the second and third steps. These coefficients can in turn be used to approximate the gravitational potential in a cell, which can then be used by the hydrodynamics solver [37]. The first of the three FMM steps requires a bottom-up traversal of the octree datastructure. The fluid density of the cells of the highest level is the starting point. The multipole moments of every other cell are then calculated using the multipole moments of its child cells. We can additionally compute the center of mass for each refined cell. While this step includes a tree-traversal, it is not very compute-intensive.
In the second FMM step (same-level), we use the multipole moments and the centers of mass to calculate how much the gravity in each cell is influenced by its neighboring cells on the same octree level. How many cells are considered "neighboring" is determined by the so-called opening criteria [37]. However, their number is constant on each level. The result of these interactions is a Taylor-series expansion. This is the most compute-intensive part. In the third FMM step, the gravitational influence of cells outside of the opening criteria is computed, and the octree is traversed top-down. The respective Taylor series expansion of the parent node is passed to the child nodes and accumulated. In the first and third step we calculate interactions between either child nodes and their respective parents or vice versa. Since a refined node only has 8 children, the number of these interactions is limited. In the second step, the number of same-level interactions per cell that need to be calculated is much higher. For our choice of parameters, each cell interacts with 1074 of its close neighbors, assuming they exist. The second FMM step (same-level interactions) is by far the most compute-intensive part. Originally, it required about 70% of the total scenario runtime and was thus the core focus of previous optimizations. In the original implementation, lookup of close neighbor cells was performed using an interaction list, and data was stored in an array-of-structs format. In order to improve cache efficiency and vector-unit usage, we changed it to a stencil-based approach and are now utilizing a struct-of-arrays datastructure. Compared to the old interaction-list approach, this led to a speedup of the total application runtime between 1.90 and 2.22 on AVX512 CPUs and between 1.23 and 1.35 on AVX2 CPUs [15]. Furthermore, we achieved node-level scaling as well as performance portability between different CPU architectures through the use of Vc [15,45]. After these optimizations, the FMM required only about 40% (depending on the hardware) of the total scenario runtime, with its compute kernels reaching a significant fraction of peak on multiple platforms, as we will demonstrate in Sect. 6.1. Due to the presence of AMR, there are four different cases of same-level interactions: 1) multipole-monopole interactions between cells of a refined octree node (multipoles) and cells of a non-refined octree node (monopoles); 2) multipole-multipole interactions; 3) monopole-monopole interactions; and 4) monopole-multipole interactions. This yields four kernels per octree node. Their input data are the current node's sub-grid as well as all sub-grids of all neighboring nodes as a halo (ghost layer). The kernels then compute all interactions of a certain type and add the result to the Taylor coefficients of the respective cells in the sub-grid. We were able to combine the multipole-multipole and the multipole-monopole kernels into a single kernel, yielding three compute kernels in our implementation. As the monopole-multipole kernel consumes only about 2% of the total runtime, we ignore it in the following. The remaining two compute kernels, 1)/2) and 3), are the central hotspots of the application. Each kernel launch applies a 1074-element stencil for each cell of the octree's sub-grid. As we have N^3 = 512 cells per sub-grid, this results in 549,888 interactions per kernel launch. Depending on the interaction type, each of those interactions requires a different number of floating point operations to be executed.
For monopole-monopole interactions we execute 12 floating point operations per interaction, and for multipole-multipole/monopole interactions 455 floating point operations. More information about the kernels can be found in [45]; however, the number of floating point operations per monopole interaction differs slightly there, as we combined the two monopole-X kernels. IMPROVING OCTO-TIGER USING HIGH-LEVEL ABSTRACTIONS Running an irregular, adaptive application like Octo-Tiger on a heterogeneous supercomputer like Piz Daint presents challenges: The pockets of parallelism contained in each octree node must be run efficiently on the GPU, despite the relatively small number of cells in each sub-grid. The GPU implementation should not degrade parallel efficiency through overheads such as work aggregation, CPU/GPU synchronization, or blocked CPU threads. Furthermore, we expect the implementation to behave as before, with the exception of faster GPU execution of tasks. In this section, we first present our implementation and integration of FMM GPU kernels into the task flow using HPX CUDA futures as a high-level abstraction. We then introduce the libfabric parcelport and show how this new communication layer improves scalability of Octo-Tiger by taking advantage of HPX's communications abstractions. Asynchronous Many Tasks with GPUs As our FMM implementation is stencil-based and uses a struct-of-arrays datastructure, the FMM kernels introduced in Section 4.3 are very amenable to GPU execution. Each kernel executes a 1074-element stencil on the 512 cells of the 8x8x8 sub-grid of an octree node, calculating the gravitational interactions of each cell with its 1074 neighbors. We parallelize over the cells of the sub-grid, launching kernels with 8 blocks, each containing 64 CUDA threads which execute the whole stencil for each cell. The stencil-based computation of the interactions between two cells is done the same way as on the CPU. In fact, since we use Vc datatypes for vectorization on the CPU, we can simply instantiate the same function template (that computes the interaction between two cells) with scalar datatypes and call it within the GPU kernel. GPU-specific optimizations are done in a wrapper around this cell-to-cell method and the loop over the stencil elements. This wrapper includes the usual CUDA optimizations such as shared and constant memory usage. Thus far we have used standard CUDA to create fairly conventional kernels for the FMM implementation. However, these kernels alone suffer from two major issues: As it stands, the execution of a GPU kernel would block the CPU thread launching it; no other task would be scheduled or executed whilst it runs. As Octo-Tiger relies on having thousands of tasks available simultaneously for scalability, this presents a problem. The second issue is obvious when looking at the size of the workgroups and the number of blocks for each GPU kernel launch mentioned above. With only small workgroups and 8 blocks per kernel, the GPU kernels do not expose enough parallelism to fully utilize a GPU such as the NVIDIA P100. To solve these two issues, we provide an HPX abstraction for CUDA streams. For any CUDA stream event we create an HPX future that becomes ready once operations in the stream (up to the point of the event/future's creation) are finished. Internally, this is created using a CUDA callback function that sets the future ready [24].
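The following sketch conveys the idea behind this callback-based future. It is simplified and hypothetical: HPX's actual CUDA integration provides this functionality and handles scheduler interaction, thread-safety, and error checking more carefully than shown here, and the promise type and headers vary between HPX versions.

#include <cuda_runtime.h>
#include <hpx/include/lcos.hpp>

namespace {
// Invoked by the CUDA runtime once all work enqueued on the stream so far has finished.
void CUDART_CB mark_ready(cudaStream_t, cudaError_t, void* data)
{
    auto* p = static_cast<hpx::lcos::local::promise<void>*>(data);
    p->set_value();   // makes the associated HPX future ready
    delete p;
}
}

hpx::future<void> future_from_stream(cudaStream_t stream)
{
    auto* p = new hpx::lcos::local::promise<void>();
    hpx::future<void> f = p->get_future();
    cudaStreamAddCallback(stream, mark_ready, p, 0);
    return f;
}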
This seemingly simple construct allows us to fully integrate CUDA kernels within the HPX runtime, as it provides a synchronization point for the CUDA stream that is compatible with the HPX scheduler. It yields multiple immediate advantages: • Seamless and automatic execution of kernels and overlapping of CPU/GPU tasks; • overlapping of computation and communication, as some HPX tasks are related to the communication with other compute nodes; and • CPU/GPU data synchronization: completed GPU kernels trigger the scheduler, signaling that buffers can be used or copied. Furthermore, the integration is mostly non-invasive since a CUDA kernel invocation now equates to a function call returning a future. The rest of the kernel implementation and the (asynchronous) buffer handling uses the normal CUDA API, thus the GPU kernels themselves can still be hand-optimized. Nonetheless, this integration alone does not solve the second issue: The kernels are too fine-grained to fully utilize the GPUs. Conventional approaches to solve this include work aggregation and execution models where CUDA kernels can call other kernels and coalesce execution. Unfortunately, work aggregation schemes, as described in [42], do not fit our task-based approach. Individual kernels should finish as soon as possible in order to trigger dependent ones, such as communication with other nodes or the third FMM step; delays in launching these may lead to a degradation of parallel efficiency. Recursively calling other GPU kernels as in [59] poses a similar problem as we would traverse the octree on the GPU, making communication calls more difficult. Furthermore, we would like to run code on the appropriate device: tree traversals on the CPU, and processing of the octree kernels on the GPU. Here, however, we can exploit the fact that the execution of GPU kernels is just another task to the HPX runtime system: We launch a multitude of different GPU kernels on different streams, with each CPU thread handling multiple CUDA streams and thus multiple GPU kernels concurrently. Normally, this would present problems for CPU/GPU synchronization as GPU results are needed for other CPU tasks. But the continuation-passing style of program execution in HPX, chaining dependent tasks onto futures, makes this trivial. When a GPU kernel output (or data transfer) that has not yet finished is needed for a task, the runtime assigns different work to the CPU and schedules the dependent tasks when the GPU future becomes ready. When the number of concurrent GPU tasks running matches the total number of available CUDA streams (usually 128 per GPU), new kernels are instead executed as CPU tasks until a CUDA stream becomes empty again. In summary, the octree is traversed on the CPU, with tasks spawned asynchronously for kernels on the GPU or CPU, each returning a future. Any tasks that require results from previous ones are attached as continuations to the futures. The CPU is continuously supplied with new work (including communication tasks) as futures complete. Since all CPU threads may participate in traversal and steal work from each other, we keep the GPU busy by nature of the sheer number of concurrent GPU kernels submitted. Octo-Tiger is the first application to use HPX CUDA futures.
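The dispatch policy just described can be summarized in a few lines. This is a sketch only: Subgrid, Stream, and the two launch helpers are hypothetical stand-ins and do not mirror Octo-Tiger's actual interfaces.

#include <hpx/include/async.hpp>
#include <functional>
#include <vector>

struct Subgrid;                           // hypothetical: one 8x8x8 octree sub-grid
struct Stream { bool idle() const; };     // hypothetical wrapper around a CUDA stream

hpx::future<void> launch_fmm_on_gpu(Subgrid const&, Stream&);   // hypothetical
void run_fmm_on_cpu(Subgrid const&);                            // hypothetical

hpx::future<void> dispatch_fmm_kernel(Subgrid const& grid,
                                      std::vector<Stream>& my_streams)
{
    // Prefer the GPU: use the first idle CUDA stream owned by this worker thread.
    for (Stream& s : my_streams)
        if (s.idle())
            return launch_fmm_on_gpu(grid, s);

    // All streams are busy (e.g. 128 kernels already in flight on this GPU):
    // run the kernel as a regular CPU task instead of waiting.
    return hpx::async(run_fmm_on_cpu, std::cref(grid));
}

In Octo-Tiger this policy keeps both the CPU cores and the GPU busy without any global synchronization point.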
It is in fact an ideal fit for this kind of GPU integration: Parallelization is possible only within individual timesteps of the application, and a production run will require tens of thousands of them, making it essential to maximize parallel efficiency (as well as proper GPU usage), particularly as each timestep might run for only a fraction of a second on the whole machine. The fine-grained approach of GPU usage presented here fits these challenges perfectly. In Section 6 we show how this model performs. We run a real-world scenario for a few timesteps to show both that we achieve a significant fraction of GPU peak performance during the execution of the FMM and that we can scale to the whole Piz Daint machine, with each of the 5400 compute nodes using an NVIDIA P100 GPU. Thus, Octo-Tiger also serves as a proof of concept, showing that large, tree-based applications containing pockets of parallelism can efficiently run fine-grained parallel tasks on the GPU without compromising scalability with HPX. Active messages and libfabric parcelport The programming model of HPX does not rely on the user matching network sends and receives explicitly as one would do with MPI. Instead, active messages are used to transfer data and trigger a function on a remote node; we refer to the triggering of remote functions with bound arguments as actions and the messages containing the serialized data and remote function as parcels [7]. A halo exchange, for example, written using MPI involves a receive operation posted on one node and a matching send on another. With non-blocking MPI operations, the user may check for readiness of the received data at a convenient place in the code and then act appropriately. With blocking ones, the user must wait for the received data and can only continue once it arrives. With HPX, the same halo exchange may be accomplished by creating a future for some data on the receiving end, and having the sending end trigger an action that sets the future ready with the contents of the parcel data. Since futures in HPX are the basic synchronization primitive for work, the user may attach a continuation to the received data to start the next calculation that depends on it. The user therefore does not have to perform any test for readiness of the received data: When it arrives, the runtime will set the future and schedule whatever work depends upon it automatically. This combines the convenience of a blocking receive that triggers work with that of an asynchronous receive that allows the runtime to continue whilst waiting. The asynchronous send/receive abstraction in HPX has been extended with the concept of a channel that the receiving end may fetch futures from (for timesteps ahead if desired) and the sending end may push data into as it is generated; a hedged sketch of such a channel-based halo exchange is shown below. Channels are set up by the user similarly to MPI communicators; however, the handles to channels are managed by AGAS (Sect. 4.1). Even when a grid cell is migrated from one node to another during operation, the runtime manages the updated destination address transparently, allowing the user code to send data to the relocated grid with minimal disruption. These abstractions greatly simplify user-level code and allow performance improvements in the runtime to be propagated seamlessly to all places that use them. The default messaging layer in HPX is built on top of the asynchronous two-sided MPI API and uses Isend/Irecv within the parcel encoding and decoding steps of action transmission and execution.
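A halo exchange over such a channel might look roughly as follows. This is a sketch under assumptions: the buffer type and function names are hypothetical, and the exact channel API differs between HPX versions.

#include <hpx/include/lcos.hpp>
#include <utility>
#include <vector>

using halo_buffer  = std::vector<double>;               // hypothetical payload type
using halo_channel = hpx::lcos::channel<halo_buffer>;   // one channel per neighbor/direction

// Receiving side: obtain a future for the next halo and attach the dependent
// work as a continuation; no explicit readiness test is needed.
hpx::future<void> receive_and_apply(halo_channel& from_neighbor)
{
    return from_neighbor.get().then(
        [](hpx::future<halo_buffer> f) {
            halo_buffer halo = f.get();
            // apply_halo(halo);  // hypothetical: update the ghost layer
        });
}

// Sending side: push the halo as soon as it has been produced.
void send_halo(halo_channel& to_neighbor, halo_buffer halo)
{
    to_neighbor.set(std::move(halo));
}

Whichever parcelport sits underneath, this application-level code stays unchanged; only the way the serialized data travels between nodes differs.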
HPX is designed from the ground up to be multi-threaded, to avoid locking and waiting, and instead to suspend tasks and execute others as soon as any blocking activity takes place. Although MPI supports multi-threaded applications, it has its own internal progress/scheduling management and locking mechanisms that interfere with the smooth running of the HPX runtime. The scheduling in MPI is in turn built upon the network provider's asynchronous completion queue handling and multi-threaded support, which may also use OS-level locks that suspend threads (and thus impede HPX progress). The HPX parcel format is more complex than a simple MPI message, but the overheads of packing data can be kept to a minimum [7] by using remote memory access (RMA) for transfers. All user/packed data buffers larger than the eager message size threshold are encoded as pointers and exchanged between nodes using one-sided RMA put/get operations. Switching HPX to use the one-sided MPI RMA API is no solution, as this involves memory registration/pinning that is passed through to the provider-level API, causing additional (unwanted) synchronization between user code, MPI code, and the underlying network/fabric driver. Bypassing MPI and using the network API directly to improve performance was seen as a way of decreasing latency, improving memory management, simplifying the parcelport code, and better integrating the multi-threaded runtime with the communications layer. Libfabric was chosen as it has an ideal API that is supported on many platforms, including Cray machines via the GNI provider [46]. The purely asynchronous API of libfabric blends seamlessly with the asynchronous internals of HPX. Any task scheduling thread may poll for completions in libfabric and set futures to received data without any intervening layer. A one-to-one mapping of completion events to ready futures is possible for some actions, and dependencies for those futures can be immediately scheduled for execution. We expose pinned memory buffers for RMA to libfabric via allocators in the HPX runtime, so that internal data copying between user buffers (halos for example) and the network is minimized. When dealing with GPUs capable of multi-TFLOP performance, even delays of the order of microseconds in receiving data and launching subsequent tasks translate to a significant loss of compute capability. Note that with the HPX API it is trivial to reserve cores for thread pools dedicated to background processing of the network separate from normal task execution to further improve performance, but this has not yet been attempted with the Octo-Tiger code. Our libfabric parcelport uses only a small subset of the libfabric API but delivers very high performance compared to MPI, as we demonstrate in Sect. 6. Similar gains could probably be made using the MPI RMA API, but this would require a much more complex implementation. It is a key contribution of this work that we have demonstrated that an application may benefit from significant performance improvements in the runtime without changing a single line of the application code. This has been achieved utilizing abstractions for task management, scheduling, distribution, and messaging. It is generally true of any library that improvements in performance will produce corresponding improvements in code using it.
But switching a large codebase to one-sided or asynchronous messaging is usually a major operation that involves redesigns of significant portions to handle synchronization between previously isolated (or sequential) sections. The unified futurized and asynchronous API of HPX provides a unique opportunity to take advantage of improvements at all levels of parallelism throughout a code, as all tasks are naturally overlapped. Network bandwidth and latency improvements reduce waiting not only for remote data; the improved scheduling of all messages (synchronization of remote tasks as well as direct data movement) also directly improves on-node scheduling and thus benefits all tasks. RESULTS The initial model of our V1309 simulation includes a 1.54 M⊙ primary and a 0.17 M⊙ secondary. Both have helium cores and solar-composition envelopes, and a common envelope surrounds the two stars. The simulation domain is a cubic grid with edges 1.02 × 10^3 R⊙ long. This is about 160 times larger than the initial orbital separation, providing space for any mass ejected from the system. The sub-grids are 8 × 8 × 8 grid cells. The centers of mass of the components are 6.37 R⊙ apart. The grid is rotating about the z-axis with a period of 1.42 days, corresponding to the initial period of the binary. For the level 14 run, both stars are refined down to 12 levels, with the core of the accretor and donor refined to 13 and 14 levels, respectively. The 15, 16, and 17 level runs are successively refined one more level in each refinement regime. At the finest level, each grid cell is 7.80 × 10^-3 R⊙ in each dimension for level 14, down to 9.750 × 10^-4 R⊙ for level 17. Although available compute time allowed us only to simulate a few time-steps for this work, this is exactly the production scenario we aim for. For all obtained results, the software dependencies in Table 1 were used to build Octo-Tiger (d6ad085) on the various platforms. FMM Node-Level Performance In the following, we will take a closer look at the performance of the FMM kernels, discussed in Sect. 4.3 and 5.1, on both GPUs and different CPU platforms. We will first explain how we made measurements and then discuss the results. 6.1.1 Measuring the Node-Level Performance. Measuring the node-level results for the FMM solver alone presents several challenges. Instead of a few large kernels, we are executing millions of small FMM kernels overall. Additionally, one FMM kernel alone will never utilize the complete device. On the CPU, each FMM kernel is executed by just one core. We cannot assume that the other cores will always be busy executing an FMM kernel as well. On the GPU, one kernel will utilize only up to 8 Streaming Multiprocessors (SMs). The NVIDIA P100 GPU contains 56 of these SMs, each of which is analogous to a SIMD-enabled processor core. In order to see how well we utilize the given hardware with the FMM kernels, we focus not on the performance of a single kernel but rather on the overall performance while computing the gravity during the GPU-accelerated FMM part of the code. In order to calculate both the GFLOP/s and the fraction of peak performance, we need to know the number of floating point operations executed while calculating the gravity, as well as the time required to do so. The first piece of information is easy to collect. Each FMM kernel always executes a constant number of floating point operations.
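As an illustration of this bookkeeping (a sketch using the per-kernel constants quoted in this paper; the counter and function names are hypothetical):

#include <cstdint>

// 549,888 interactions per kernel launch (512 cells x 1074 stencil elements);
// 12 flops per monopole-monopole interaction, 455 per multipole interaction.
constexpr std::int64_t interactions_per_kernel = 512LL * 1074LL;
constexpr std::int64_t flops_monopole_kernel   = interactions_per_kernel * 12;
constexpr std::int64_t flops_multipole_kernel  = interactions_per_kernel * 455;

// Launch counters are accumulated per HPX worker thread during the run; the
// gravity-solver time comes from the perf-based runtime split described below.
double fmm_gflops(std::int64_t monopole_launches,
                  std::int64_t multipole_launches,
                  double gravity_solver_seconds)
{
    double const total_flops =
        static_cast<double>(monopole_launches)  * flops_monopole_kernel +
        static_cast<double>(multipole_launches) * flops_multipole_kernel;
    return total_flops / gravity_solver_seconds / 1e9;
}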
We count the number of kernel launches in each HPX thread and accumulate this number until the end of the simulation. We can further record whether a kernel was executed on the CPU or the GPU. Due to the interleaving of kernels and the general lack of synchronization points between the gravity solver and the fluid solver, the amount of runtime spent in the FMM solver is more difficult to obtain. To measure it, we run the simulation multiple times; first, on the CPU without any GPUs. We collect profiling data with perf to get an estimate of the fraction of the runtime spent within the FMM kernels and thus the gravity solver. With this information we calculate the fraction of the runtime spent outside the gravity solver. Afterwards we repeat the run without perf and multiply its total runtime by the previously obtained runtime fractions to get both the time spent in the gravity solver and the time spent in other methods. With this information, as well as the counters for the FMM kernel launches, we can now calculate the GFLOP/s achieved by the CPU when executing the FMM kernels. To get the same information for the GPUs, we include them in a third run of the same simulation. Using the GPUs, only the runtime of the gravity solver will improve, since the rest of the code does not benefit from them. Thus, by subtracting the runtime spent outside of the FMM kernels in the CPU-only run from the total runtime of the third run, we can estimate the overall runtime of the GPU-enabled FMM kernels and with that the GFLOP/s we achieve overall during their execution. For all results in this work, we employ the same V1309 scenario and double precision calculations. The level 14 octree discretization considered here will serve as the baseline for scaling runs. Results. The results of our node-level runs can be found in Tab. 2. After switching to a stencil-based approach for the FMM instead of the old interaction lists, the fraction of time spent in the two main FMM kernels shrank considerably. On the Intel Xeon E5-2660 v3 with 20 cores, they now only make up 38% of the total runtime. On the Intel Xeon Phi 7210 this difference is even larger, with the FMM only making up 20% of the total runtime. This is most likely due to the fact that the other, less optimized parts of Octo-Tiger make less use of the SIMD capabilities that the Xeon Phi offers and thus run much slower. This reduces the overall fraction of the FMM runtime compared to the rest of the code. Nevertheless, we achieve a significant fraction of peak performance on all devices. On the CPU side, the Xeon Phi 7210 achieves the most GFLOP/s within the FMM kernels. Since it lowers its frequency to 1.1 GHz during AVX-intensive calculations, the real achieved fraction of peak performance may be significantly higher than 17%. We have assumed the base (unthrottled) clock rate shown in the table for calculating the theoretical peak performance of the CPU devices. Other than running a specific Vc version that supports AVX512 on Xeon Phi, we did not adapt the code. However, we attain a reasonable fraction of peak performance on this difficult hardware. On the AVX2 CPUs we reach about 30%. We tested GPU performance of the FMM kernels in multiple hardware configurations; we used either 10 or 20 cores in combination with either one or two V100 GPUs. With two V100 GPUs, an insufficient number of cores limits performance. With 20 cores and two GPUs we achieve 37% of the combined V100 peak performance.
When reducing to 10 cores, the performance drops to 22% of peak: the GPUs are starved of work, since the 10 cores have many tasks of their own and cannot launch enough kernels on the GPU. Conversely, when utilizing one V100 GPU managed by 10 cores, we achieve 32% of peak performance on the GPU, but using one V100 with 20 CPU cores, the performance decreases to only 22% of peak: The number of threads used to fill the CUDA streams of the GPU directly affects the performance. This effect can be explained by the way we handle CUDA streams. Each CPU thread manages a certain number of CUDA streams. When launching a kernel, a thread first checks whether all of the CUDA streams it manages are busy. If not, the kernel will be launched on the GPU using an idle stream. Otherwise, the kernel will be executed on the CPU by the current CPU worker thread. Executing an FMM kernel on the CPU takes significantly longer than on the GPU, as one CPU kernel will be executed on one core. In a CPU-only setting all cores are working on FMM kernels of different octree nodes. With 20 cores and one V100, the CPU threads first fill all 128 streams with 128 kernel launches. When the next kernels are to be launched, the GPU has not finished yet, and the CPU threads start to work on FMM kernels themselves. This leads to starvation of the GPU for a short period of time, as the CPU threads are not launching more work on the GPU in the meantime. Having two V100s offsets the problem, as the cores are less likely to work on the FMM themselves: It is more likely that there is a free CUDA stream available. We analyzed the number of kernels launched on the GPU to provide further data on this. Using 20 cores and one V100 we launch 97.4995% of all multipole-multipole FMM kernels on the GPU. Using 10 cores and one V100 this number increases to 99.9997%. Considering that a CPU FMM execution on one core takes longer than on the GPU and that during this time no further GPU kernels are launched, the small difference in percentage can have a large performance impact. This is a current limitation of our implementation and will be addressed in the next version of Octo-Tiger: There is no reason not to launch multiple FMM kernels in one stream if there is no empty stream available. This would lead to 100% of the FMM kernels being launched on the GPU, independent of the CPU hardware. Table 4: Number of tree nodes (sub-grids) per level of refinement (LoR) and the memory usage of the corresponding level. Since Piz Daint is our target system, we also evaluated performance on one of its nodes, using 128 CUDA streams. For comparison, 99.5207% of all multipole-multipole FMM kernels were launched on the GPU. We achieve about 21% of peak performance on the GPU. In summary, we were able to demonstrate that the uncommon approach of launching many small kernels is a valid way to utilize the GPU. Scaling results All of the presented distributed scaling results were obtained on Piz Daint at the Swiss National Supercomputing Centre. Table 3 lists the hardware configuration of Piz Daint. For the scalability analysis of Octo-Tiger, different levels of refinement of the V1309 scenario were run, as shown in Tab. 4. A level 13 restart file, which takes less than an hour to generate on an Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz, was used as the basis for all runs. For all levels the level 13 restart file was read and refined to higher levels of resolution through conservative interpolation of the evolved variables.
The number of nodes was increased in powers of two (1, 2, 4, ..., 4096), up to a maximum of 5400 nodes, which corresponds to the full system on Piz Daint. All runs utilized 12 CPU cores on each node, i.e. up to 64,800 cores for the full-system run. The simulations started at level 14, the smallest that fits on a single Piz Daint node with respect to memory while still consisting of an acceptable number of sub-grids to expose sufficient parallelism. The number of nodes was increased by a power of two until the scaling saturated due to too little work per node. Higher refinement levels were then run on the largest overlapping node counts to produce the graph shown in Fig. 2, where the speedup is calculated with respect to the number of processed sub-grids per second on one node at level 14. The graph therefore shows a combination of weak scaling as the level of refinement increases and strong scaling for each refinement level as the node count increases. Weak scaling is clearly very good, with close to optimal improvements with successive refinement levels. Strong scaling tails off as the number of sub-grids for each level becomes too small to generate sufficient work for all CPUs/GPUs. The performance difference between the number of sub-grids processed per second for the two parcelports increases with higher node counts and refinement levels, a clear sign that communication causes delays that prevent the processing cores from getting work done. Each increase in the refinement level can, due to AMR, increase the total number of grids by up to a factor of 8; see Tab. 4 for the actual values. This causes a near quadratic increase in the total number of halos exchanged. As the node count increases, the probability of a halo exchange increases linearly, and it is therefore no surprise that reduced communication latency leads to the large gains observed. Network performance results The improvement in communication is due to all of the following changes: • Explicit use of RMA for the transfer of halo buffers. • Lower latency on send and receive of all parcels and execution of RMA transfers. • Direct control of all memory copies for send/receive buffers between the HPX runtime and the libfabric driver. • Reduced overhead between receipt of a transfer/message completion event and subsequent setting of a ready future. • Thread-safe lock-free interface between the HPX scheduling loop and the libfabric API with polling for network progress/completions integrated into the HPX task scheduling loop. It is important to note that the timing results shown are for the core calculation steps that exchange halos, and the figures do not include regridding steps or I/O that also make heavy use of communication. Including them would further illustrate the effectiveness of the networking layer: Start-up timings of the main solver at refinement levels 16 and 17 were in fact reduced by an order of magnitude using the libfabric parcelport, increasing the efficiency of refining the initial restart file of level 13 to the desired level of resolution. Note further that some data points at levels 16 and 17 for large runs are missing, as the start-up time consumed the limited node hours available to their execution. The communication speedups shown have not separately considered the effects of thread pools and the scheduling of network progress on the rates of injection or the handling of messages.
When running on Piz Daint with 12 worker threads executing tasks, any thread might need to send data across the network. In general, the injection of data into send queues does not cause problems unless many threads are attempting to do so concurrently and the send queues are full. The receipt of data, however, must be performed by polling of completion queues. This can only take place in between the execution of other tasks. Thus, if all cores are busy with work, no polling is done, and if no work is available, all cores compete for access to the network. The effects can be observed in Fig. 3, where the libfabric parcelport causes a slight reduction in performance for lower node counts. With GPUs doing most of the work, CPU cores can be reserved for network processing, and the job of polling can be restricted to a subset of cores that have no other (longer-running) tasks to execute. HPX supports partitioning of a compute node into separate thread pools with different responsibilities; the effects of this will be investigated further to see whether reducing contention between cores helps to restore the lost performance. CONCLUSIONS AND FUTURE WORK As the core contributions of this paper, we have demonstrated node-level and distributed performance of Octo-Tiger, an astrophysics code simulating a stellar binary merger. We have shown excellent scaling up to the full system on Piz Daint and improved network performance based on the libfabric library. The high-level abstractions we employ, in particular HPX and Vc, demonstrate how portability across heterogeneous HPC systems is possible. This is the first time an HPX application has been run on the full system of a GPU-accelerated supercomputer. This work also has several implications for parallel programming on future architectures. Asynchronous many-task runtime systems like HPX are a powerful, viable, and promising addition to the current landscape of parallel programming models. We show that it is not only possible to utilize these emerging tools to perform at the largest scales, but also that it might even be desirable to leverage the latency hiding, finer-grained parallelism, and natural support for heterogeneity that the asynchronous many-task model exposes. In particular, we have significantly increased node-level performance of the originally most compute-hungry part of Octo-Tiger, the gravitational solver. Our optimizations have demonstrated excellent node-level performance on different HPC compute nodes with heterogeneous hardware, including multi-GPU systems and KNL. We have achieved up to 37% of the peak performance on two NVIDIA V100 GPUs, and 17% of peak on a KNL system. To achieve high node-level performance for the full simulation, we will also port the remaining part, the hydrodynamics solver, to GPUs. The distributed scaling results have been obtained within a development project on Piz Daint and thus with severely limited compute time. The excellent results presented in this paper have already built the foundation for a production proposal that will enable us to target full-resolution simulations with impact on the physics. Despite the significant performance improvement from replacing MPI with libfabric, more networking improvements are under development that have not yet been incorporated into Octo-Tiger. These include the use of user-controlled RMA buffers that allow the user to instruct the runtime that certain memory regions will be used repeatedly for communication (and thus amortize memory pinning/registration costs).
Integration of such features into the channel abstraction may prove to reduce latencies further and is an area we will explore. With respect to the astrophysical application, we have already developed a radiation transport module for Octo-Tiger based on the two-moment approach adapted by [48]. This will be required to simulate the V1309 merger with high accuracy. What remains is to fully debug and verify this module and to port the implementation to GPUs. Finally, our full-scale simulations will be able to predict the outcome of mergers that have not yet happened: These simulations will be useful for comparison with future "red nova" contact-binary merger events. Two contact-binary systems have been suggested as future mergers, KIC 9832227 [40,49] and TY Pup [47]. Other candidate systems will be discovered with the new all-sky surveys such as the Zwicky Transient Facility (ZTF) and the Large Synoptic Survey Telescope (LSST).
8,475
1908.03121
2966129987
We study the simulation of stellar mergers, which requires complex simulations with high computational demands. We have developed Octo-Tiger, a finite volume grid-based hydrodynamics simulation code with Adaptive Mesh Refinement which is unique in conserving both linear and angular momentum to machine precision. To face the challenge of increasingly complex, diverse, and heterogeneous HPC systems, Octo-Tiger relies on high-level programming abstractions. We use HPX with its futurization capabilities to ensure scalability both between and within nodes, and present first results replacing MPI with libfabric, achieving up to a 2.8x speedup. We extend Octo-Tiger to heterogeneous GPU-accelerated supercomputers, demonstrating node-level performance and portability. We show scalability up to full system runs on Piz Daint. For the scenario's maximum resolution, the compute-critical parts (hydrodynamics and gravity) achieve 68.1% parallel efficiency at 2048 nodes.
Adaptive multithreading systems such as HPX expose concurrency by using user-level threads. Some other notable solutions that take such an approach are Uintah @cite_2 , Chapel @cite_55 , Charm++ @cite_12 , Kokkos @cite_36 , Legion @cite_5 , and PaRSEC @cite_23 . Note that we only refer to distributed-memory-capable solutions, since we focus here on large distributed simulations. Different task-based parallel programming models, e.g. Cilk Plus, OpenMP, Intel TBB, Qthreads, StarPU, GASPI, Chapel, Charm++, and HPX, are compared in @cite_27 . Our requirements (distributed, task-based, asynchronous) are met by only a few of these, of which HPX has the highest technology readiness level according to this review. It is furthermore the only one with a future-proof, C++-standard-conforming API, and it allows us to support the libfabric networking library without changing application code. For more details, see Sec. .
{ "abstract": [ "Abstract The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. A major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism ( e.g. , OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos’ abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. The Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.", "In this paper we consider productivity challenges for parallel programmers and explore ways that parallel language design might help improve end-user productivity. We offer a candidate list of desirable qualities for a parallel programming language, and describe how these qualities are addressed in the design of the Chapel language. In doing so, we provide an overview of Chapel's features and how they help address parallel productivity. We also survey current techniques for parallel programming and describe ways in which we consider them to fall short of our idealized productive programming model.", "Task-based programming models for shared memory—such as Cilk Plus and OpenMP 3—are well established and documented. However, with the increase in parallel, many-core, and heterogeneous systems, a number of research-driven projects have developed more diversified task-based support, employing various programming and runtime features. Unfortunately, despite the fact that dozens of different task-based systems exist today and are actively used for parallel and high-performance computing (HPC), no comprehensive overview or classification of task-based technologies for HPC exists. In this paper, we provide an initial task-focused taxonomy for HPC technologies, which covers both programming interfaces and runtime mechanisms. We demonstrate the usefulness of our taxonomy by classifying state-of-the-art task-based environments in use today.", "New high-performance computing system designs with steeply escalating processor and core counts, burgeoning heterogeneity and accelerators, and increasingly unpredictable memory access times call for one or more dramatically new programming paradigms. These new approaches must react and adapt quickly to unexpected contentions and delays, and they must provide the execution environment with sufficient intelligence and flexibility to rearrange the execution to improve resource utilization. The authors present an approach based on task parallelism that reveals the application's parallelism by expressing its algorithm as a task flow. 
This strategy allows the algorithm to be decoupled from the data distribution and the underlying hardware, since the algorithm is entirely expressed as flows of data. This kind of layering provides a clear separation of concerns among architecture, algorithm, and data distribution. Developers benefit from this separation because they can focus solely on the algorithmic level without the constraints involved with programming for current and future hardware trends.", "Describes Uintah, a component-based visual problem-solving environment (PSE) that is designed to specifically address the unique problems of massively parallel computation on tera-scale computing platforms. Uintah supports the entire life-cycle of scientific applications by allowing scientific programmers to quickly and easily develop new techniques, debug new implementations and apply known algorithms to solve novel problems. Uintah is built on three principles: (1) as much as possible, the complexities of parallel execution should be handled for the scientist, (2) the software should be reusable at the component level, and (3) scientists should be able to dynamically steer and visualize their simulation results as the simulation executes. To provide this functionality, Uintah builds upon the best features of the SCIRun (Scientific Computing and Imaging Run-time) PSE and the DoE (Department of Energy) Common Component Architecture (CCA).", "Modern parallel architectures have both heterogeneous processors and deep, complex memory hierarchies. We present Legion, a programming model and runtime system for achieving high performance on these machines. Legion is organized around logical regions, which express both locality and independence of program data, and tasks, functions that perform computations on regions. We describe a runtime system that dynamically extracts parallelism from Legion programs, using a distributed, parallel scheduling algorithm that identifies both independent tasks and nested parallelism. Legion also enables explicit, programmer controlled movement of data through the memory hierarchy and placement of tasks based on locality information via a novel mapping interface. We evaluate our Legion implementation on three applications: fluid-flow on a regular grid, a three-level AMR code solving a heat diffusion equation, and a circuit simulation.", "We describe Charm++, an object oriented portable parallel programming language based on C++. Its design philosophy, implementation, sample applications and their performance on various parallel machines are described. Charm++ is an explicitly parallel language consisting of C++ with a few extensions. It provides a clear separation between sequential and parallel objects. The execution model of Charm++ is message driven, thus helping one write programs that are latency-tolerant. The language supports multiple inheritance, dynamic binding, overloading, strong typing, and reuse for parallel objects, all of which are more difficult problems in a parallel context. Charm++ provides specific modes for sharing information between parallel objects. It is based on the Charm parallel programming system, and its runtime system implementation reuses most of the runtime system for Charm." ], "cite_N": [ "@cite_36", "@cite_55", "@cite_27", "@cite_23", "@cite_2", "@cite_5", "@cite_12" ], "mid": [ "2078794610", "2090409324", "2783728688", "2087440962", "1858694651", "2036551003", "2079577430" ] }
From Piz Daint to the Stars: Simulation of Stellar Mergers using High-Level Abstractions
Astrophysical simulations are among the classical drivers for exascale computing. They require multiple scales of physics and cover vast scales in space and time. Even the next generation of high-performance computing (HPC) systems will be insufficient to solve more than a fraction of the many conceivable scenarios. However, new HPC systems come not only with ever larger processor counts, but also with increasingly complex, diverse, and heterogeneous hardware. Evolving manycore architectures and GPUs are combined with multicore systems. This raises challenges especially for large-scale HPC simulation codes and requires going beyond traditional programming models. High-level abstractions are required to ensure that codes are portable and can be run on current HPC systems without the need to rewrite large portions of the code. We consider the simulation of stellar phenomena based on the simulation framework Octo-Tiger. In particular, we study the simulation of time-evolving stellar mergers (Fig. 1). The study of binary star evolution from the onset of mass transfer to merger can provide fundamental insight into the underlying physics. In 2008, this phenomenon was observed with photometric data, when the contact binary V1309 Scorpii merged to form a luminous red nova [58]. The vision of our work is to model this event with simulations on HPC systems. Comparing the results of our simulations with the observations will enable us to validate the model and to improve our understanding of the physical processes involved. Octo-Tiger is an HPC application and relies on high-level abstractions, in particular HPX and Vc. While HPX provides scheduling and scalability, both between and within nodes, Vc ensures portable vectorization across processor-based platforms. To make use of GPUs, we use HPX's CUDA integration in this work. Previous work has demonstrated scalability on Cori, a Cray XC40 system installed at the National Energy Research Scientific Computing Center (NERSC) [27]. However, the underlying node-level performance was rather low, and it was only possible to simulate a few time steps. Consequently, node-level performance was studied next, achieving 408 GFLOPS on the 64 cores of the Intel Knights Landing manycore processor [45]. Using the same high-level abstractions as on multicore systems, this led to a speedup of 2x compared to a 24-core Intel Skylake-SP platform. In this work, we make use of the same CPU-level abstraction library Vc [31] for SIMD vector parallelism as in the previous study, but extend Octo-Tiger to support GPU-based HPC machines. We show how the critical node-level bottleneck, the fast multipole method (FMM) kernels, can be mapped to GPUs. Our approach utilizes GPUs as co-processors, running up to 128 FMM kernels on each one simultaneously. This was implemented using CUDA streams and uses HPX's futurization approach for lock-free, low-overhead scheduling. We demonstrate the performance portability of Octo-Tiger for a set of GPU and processor-based HPC nodes. To scale more efficiently to thousands of nodes, we have integrated a new libfabric communication backend into HPX, where it can be used transparently by Octo-Tiger, the first large scientific application to use the new network layer. The libfabric implementation extensively uses one-sided communication to reduce the overhead compared to a standard two-sided MPI-based backend.
To demonstrate both our node-level GPU capabilities and our improved scalability with libfabric, we show results for full-scale runs on Piz Daint running the real-world stellar merger scenario of V1309 Scorpii for a few time-steps. Piz Daint is a Cray XC40/XC50 equipped with NVIDIA's P100 GPUs at the Swiss National Supercomputing Centre (CSCS). For our full system runs we used up to 5400 out of 5704 nodes. This is the first time an HPX application was run on a full system of a GPU-accelerated supercomputer. In Sec. 2 we briefly discuss related approaches. We describe the stellar scenario in more detail in Sec. 3, and the important parts of the overall software framework and the high-level abstractions they provide in Sec. 4. In turn, Sec. 5 shows the main contributions of this work, describing both the new libfabric parcelport and the way we utilize GPUs to accelerate the execution of critical kernels. In Sec. 6.1, we present our node-level performance results for NVIDIA GPUs, Intel Xeons, and an Intel Xeon Phi platform. Section 6.2 describes our scaling results, showing that we are able to scale with both an MPI communication backend and a libfabric communication backend of HPX. We show that the use of libfabric strongly improves performance at scale. SCENARIO: STELLAR MERGERS In September 2008, the contact binary V1309 Scorpii merged to form a luminous red nova (LRN) [58]. The Optical Gravitational Lensing Experiment (OGLE) observed this binary prior to its merger, and six years of data show its period decreasing. When the merger occurred, the system increased in brightness by a factor of about 5000. Mason et al. [39] observed the outburst spectroscopically, confirming it as an LRN. This was the first observed stellar merger of a contact binary with photometric data available prior to its merger. Possible progenitor systems for V1309 Scorpii, consisting initially of zero-age main sequence stars with unequal masses in a relatively narrow range, were proposed by Stepien in [50]. As the heavier of the two stars first begins to expand into a red giant, it transfers mass to its lower-mass companion, forming a common envelope. The binary's orbit shrinks due to friction, and the mass transfer slows down as the companion becomes the heavier of the two stars but continues to grow at the expense of the first star. Eventually this star also expands, with both stars now touching each other, forming a contact binary. Stepien et al. sampled the space of physically possible initial masses, finding that initial primary masses between 1.1 M⊙ and 1.3 M⊙ and initial secondary masses between 0.5 M⊙ and 0.9 M⊙ produced results consistent with observations prior to merger. The evolution described above results in an approximately 1.52–1.54 M⊙ primary and a 0.16–0.17 M⊙ secondary with helium cores and Sun-like atmospheres. It is theorized that the merger itself was due to the Darwin instability. When the total spin angular momentum of a binary system exceeds one third of its orbital angular momentum, the system can no longer maintain tidal synchronization. This results in a rapid tidal disruption and merger. Octo-Tiger uses its Self-Consistent Field module [20,23] to produce an initial model for V1309 to simulate this last phase of dynamical evolution. The stars are tidally synchronized and share a common atmosphere. The system parameters are chosen such that the spin angular momentum just barely exceeds one third of the orbital angular momentum.
Octo-Tiger begins the simulation just as the Darwin instability sets in (Fig. 1). SOFTWARE FRAMEWORK 4.1 HPX We have developed the Octo-Tiger application framework [52] in ISO C++11 using HPX [24-26, 28, 29, 51]. HPX is a C++ standard library for distributed and parallel programming built on top of an Asynchronous Many Task (AMT) runtime system. Such AMT runtimes may provide a means for helping programming models to fully exploit available parallelism on complex emerging HPC architectures. The HPX methodology described here includes the following essential components: • An ISO C++ standard-conforming API that enables wait-free asynchronous parallel programming, including futures, channels, and other primitives for asynchronous execution. • An Active Global Address Space (AGAS) that supports load balancing via object migration and exposes a uniform API for local and remote execution. • An active-message networking layer that enables running functions close to the objects they operate on. This also implicitly overlaps computation and communication. • A work-stealing lightweight task scheduler that enables finer-grained parallelization and synchronization and automatic load balancing across all local compute resources. • APEX, an in-situ profiling and adaptive tuning framework. The design features of HPX allow application developers to naturally use key parallelization and optimization techniques, such as overlapping communication and computation, decentralizing control flow, oversubscribing execution resources, and sending work to data instead of data to work. As a result, Octo-Tiger achieves exceptionally high system utilization and exposes very good weak- and strong-scaling behaviour. HPX exposes an asynchronous, standards-conforming programming model enabling Futurization, with which developers can express complex dataflow execution trees that generate billions of HPX tasks that are scheduled to execute only when their dependencies are satisfied [27]. Also, Futurization enables automatic parallelization and load balancing to emerge. Additionally, HPX provides a performance counter and adaptive tuning framework that allows users to access performance data, such as core utilization, task overheads, and network throughput; these diagnostic tools were instrumental in scaling Octo-Tiger to the full machine. This paper demonstrates the viability of the HPX programming model at scale using Octo-Tiger, a portable and standards-conforming application. Octo-Tiger fully embraces the C++ Parallel Programming Model, including additional constructs that are incrementally being adopted into the ISO C++ Standard. The programming model views the entire supercomputer as a single C++ abstract machine. A set of tasks operates on a set of C++ objects distributed across the system. These objects interact via asynchronous function calls; a function call to an object on a remote node is relayed as an active message to that node. A powerful and composable primitive, the future object represents and manages asynchronous execution and dataflow. A crucial property of this model is the semantic and syntactic equivalence of local and remote operations. This provides a unified approach to intra- and inter-node parallelism based on proven generic algorithms and data structures available in today's ISO C++ Standard. The programming model is intuitive and enables performance portability across a broad spectrum of increasingly diverse HPC hardware.
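As an illustration of the futurization style described above, the following minimal sketch (not Octo-Tiger code; header paths and exact API details vary between HPX versions, and the two task functions are placeholders) chains a continuation onto an asynchronously launched task:

// Minimal futurization sketch; assumes an HPX installation.
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>
#include <iostream>

double compute_multipoles(int node_id) { return 1.0 * node_id; } // placeholder work
double solve_gravity(double multipoles) { return multipoles * 2.0; } // placeholder work

int main()
{
    // Launch a task asynchronously; the returned future represents its result.
    hpx::future<double> m = hpx::async(compute_multipoles, 42);

    // Attach a continuation: it runs only once the multipoles are ready,
    // without any explicit synchronization by the caller.
    hpx::future<double> g = m.then(
        [](hpx::future<double> f) { return solve_gravity(f.get()); });

    std::cout << g.get() << "\n"; // wait for the final result
    return 0;
}

In a futurized application such as Octo-Tiger, trees of such continuations are generated for every timestep, and the scheduler executes whichever tasks have their dependencies satisfied.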
Octo-Tiger Octo-Tiger simulates the evolution of mass density, momentum, and energy of interacting binary stellar systems from the start of mass transfer to merger. It also evolves five passive scalars. It is a three-dimensional finite-volume code with Newtonian gravity that simulates binary star systems as self-gravitating compressible inviscid fluids. To simulate these fluids we need three core components: (1) a hydrodynamics solver, (2) a gravity solver that calculates the gravitational field produced by the fluid distribution, and (3) a solver to generate an initial configuration of the star system. The passive scalars, expressed in units of mass density, are evolved using the same continuity equation that describes the evolution of the mass density. They do not influence the flow itself, but are rather used to track various fluid fractions as the system evolves. In the case of V1309, these scalars are initialized to the mass density of the accretor core, the accretor envelope, the donor core, the donor envelope, and the common atmosphere between the two stars. The passive scalars are useful in post-processing. For instance, to compute the temperature we require the mass and energy densities as well as the number density. The latter is not evolved in the simulation, but can be computed from the passive scalars assuming a composition for each fraction (e.g. helium for both cores, and a solar composition for the remaining fractions). The balance of angular momentum plays an important role in the orbital evolution of binary systems. Three-dimensional astrophysical fluid codes with self-gravity do not typically conserve angular momentum. The magnitude of this violation is dependent on the particular problem and resolution. Previous works have found relative violations as high as 10⁻³ per orbit [16,38,41]. This error, accumulated over several dozen orbits, becomes significant enough to influence the fate of the system. Octo-Tiger conserves both linear and angular momenta to machine precision. In the fluid solver, this is accomplished using a technique described by [18], while the gravity solver uses our own extension to the FMM. Octo-Tiger's main data structure is a rotating Cartesian grid with adaptive mesh refinement (AMR). It is based on an adaptive octree structure. Each node is an N³ sub-grid (with N = 8 for all runs in this paper) containing the evolved variables, and can be further refined into eight child nodes. Each octree node is implemented as an HPX component. These octree nodes are distributed onto the compute nodes using a space-filling curve. For further information about implementation details we refer to [45] and [37]. The first solver that operates on this octree is a finite volume hydrodynamics solver. Octo-Tiger uses the central advection scheme of [32]. The piece-wise parabolic method (PPM) [13] is used to compute the thermodynamic variables at cell faces. A method detailed by [38] is used to conserve total energy in its interaction with the gravitational field. This technique involves applying the advection scheme to the sum of gas kinetic, internal, and potential energies, resulting in conservation of the total energy. Numerical precision of internal energy densities can suffer greatly in high Mach flows, where the kinetic energy dwarfs the gas internal energy. We use the dual-energy formalism of [10] to overcome this issue: we evolve both the total gas energy and the entropy.
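A minimal sketch of the octree/sub-grid layout just described is given below. All type and member names are hypothetical, not Octo-Tiger's actual classes; it only illustrates the shape of the data structure: an N³ sub-grid of evolved variables per node, with optional refinement into eight children.

#include <array>
#include <memory>
#include <vector>

constexpr int N = 8;                 // sub-grid edge length used in this paper
constexpr int CELLS = N * N * N;     // 512 cells per octree node

struct SubGrid {
    // struct-of-arrays storage for the evolved variables of one node (simplified)
    std::vector<double> density = std::vector<double>(CELLS);
    std::vector<double> energy  = std::vector<double>(CELLS);
    std::array<std::vector<double>, 3> momentum{
        std::vector<double>(CELLS), std::vector<double>(CELLS), std::vector<double>(CELLS)};
};

struct OctreeNode {
    SubGrid grid;
    std::array<std::unique_ptr<OctreeNode>, 8> children; // empty if not refined
    bool refined() const { return static_cast<bool>(children[0]); }
};

In the real code, each such node is an HPX component whose global address is managed by AGAS, so that refinement and distribution across compute nodes remain transparent to the solvers.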
The internal energy is then computed from either the evolved entropy or the evolved total gas energy, depending on the Mach number (entropy for high Mach flows and total gas energy for low Mach ones). The angular momentum technique described by [18] is applied to the PPM reconstruction. It adds a degree of freedom to the reconstruction of velocities on cell faces by allowing for the addition of a spatially constant angular velocity component to the linear velocities. This component is determined by evolving three additional variables corresponding to the spin angular momentum for a given cell. The gravitational field solver is based on the FMM. Octo-Tiger is unique in conserving both linear and angular momentum simultaneously and at scale using modifications to the original FMM algorithm [36,37]. Finally, we assemble the initial scenario using the Self-Consistent Field technique alongside the FMM solver. Octo-Tiger can produce initial models for binary systems that are in contact, semi-detached, or detached [37]. Since it is calculated only once, the computational demands of this solver are negligible for full-size runs. We used a test suite of four verification tests, recommended by Tasker et al. [56] for self-gravitating astrophysical codes, to verify the correctness of our results. The first two are purely hydrodynamic tests: the Sod shock tube and the Sedov-Taylor blast wave. Both have analytical solutions which we can use for comparisons. The third and fourth tests are a globular star cluster in equilibrium and one in motion. In each case, the equilibrium structure should be retained. Because Octo-Tiger is intended to simulate individual stars self-consistently, we have substituted a single star in equilibrium at rest for the third test and a single star in equilibrium in motion for the fourth test. The FMM hotspot The most compute-intensive task is the calculation of the gravitational field using the FMM, since this has to be done for each of the fluid-solver time-steps. Note that our FMM variant differs from approaches such as the implementation used in [61]. While being distributed and GPU-capable, their FMM operates on particles. Our FMM variant operates on the grid cells directly, since each grid cell has a density value which determines its mass, and thus its gravitational influence on other cells. We further differ from other (cell-based) FMM variants used for computing gravitational fields by conserving not only linear momentum, but also angular momentum, down to machine precision using the changes outlined in [36]. Due to its computational intensity, we will take a closer look at the FMM and its kernels in this section. The FMM algorithm consists of three steps. First, it computes the multipole moments and the centers of mass of the individual cells. This information is then used to calculate Taylor-series expansion coefficients in the second and third steps. These coefficients can in turn be used to approximate the gravitational potential in a cell, which can then be used by the hydrodynamics solver [37]. The first of the three FMM steps requires a bottom-up traversal of the octree data structure. The fluid density of the cells of the highest level is the starting point. The multipole moments of every other cell are then calculated using the multipole moments of its child cells. We can additionally compute the center of mass for each refined cell. While this step includes a tree traversal, it is not very compute-intensive.
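A simplified sketch of this bottom-up pass follows. The real code propagates full multipole moments; only the monopole term (total mass and mass-weighted center of mass) is shown here, and all names are illustrative.

#include <array>

struct Monopole {
    double mass = 0.0;
    std::array<double, 3> com{0.0, 0.0, 0.0}; // center of mass
};

Monopole combine_children(const std::array<Monopole, 8>& child)
{
    Monopole parent;
    for (const auto& c : child) {
        parent.mass += c.mass;
        for (int d = 0; d < 3; ++d)
            parent.com[d] += c.mass * c.com[d];  // accumulate mass-weighted positions
    }
    if (parent.mass > 0.0)
        for (int d = 0; d < 3; ++d)
            parent.com[d] /= parent.mass;        // COM = sum(m_i * x_i) / sum(m_i)
    return parent;
}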
In the second FMM step (same-level), we use the multipole moments and the centers of mass to calculate how much the gravity in each cell is influenced by its neighboring cells on the same octree level. How many cells are considered "neighboring" is determined by the so-called opening criteria [37]. However, their number is constant on each level. The result of these interactions is a Taylor-series expansion. This is the most compute-intensive part. In the third FMM step, the gravitational influence of cells outside of the opening criteria is computed, and the octree is traversed top-down. The respective Taylor-series expansion of the parent node is passed to the child nodes and accumulated. In the first and third steps, we calculate interactions between child nodes and their respective parents or vice versa. Since a refined node only has 8 children, the number of these interactions is limited. In the second step, the number of same-level interactions per cell that need to be calculated is much higher. For our choice of parameters, each cell interacts with 1074 of its close neighbors, assuming they exist. The second FMM step (same-level interactions) is by far the most compute-intensive part. Originally, it required about 70% of the total scenario runtime and was thus the core focus of previous optimizations. In the original implementation, the lookup of close neighbor cells was performed using an interaction list, and data was stored in an array-of-structs format. In order to improve cache efficiency and vector-unit usage, we changed it to a stencil-based approach and are now utilizing a struct-of-arrays data structure. Compared to the old interaction-list approach, this led to a speedup of the total application runtime between 1.90 and 2.22 on AVX512 CPUs and between 1.23 and 1.35 on AVX2 CPUs [15]. Furthermore, we achieved node-level scaling as well as performance portability between different CPU architectures through the use of Vc [15,45]. After these optimizations, the FMM required only about 40% (depending on the hardware) of the total scenario runtime, with its compute kernels reaching a significant fraction of peak on multiple platforms, as we will demonstrate in Sect. 6.1. Due to the presence of AMR, there are four different cases of same-level interactions: 1) multipole-monopole interactions between cells of a refined octree node (multipoles) and cells of a non-refined octree node (monopoles); 2) multipole-multipole interactions; 3) monopole-monopole interactions; and 4) monopole-multipole interactions. This yields four kernels per octree node. Their input data are the current node's sub-grid as well as all sub-grids of all neighboring nodes as a halo (ghost layer). The kernels then compute all interactions of a certain type and add the result to the Taylor coefficients of the respective cells in the sub-grid. We were able to combine the multipole-multipole and the multipole-monopole kernels into a single kernel, yielding three compute kernels in our implementation. As the monopole-multipole kernel consumes only about 2% of the total runtime, we ignore it in the following. The remaining two compute kernels, 1)/2) and 3), are the central hotspots of the application. Each kernel launch applies a 1074-element stencil to each cell of the octree node's sub-grid. As we have N³ = 512 cells per sub-grid, this results in 549,888 interactions per kernel launch. Depending on the interaction type, each of those interactions requires a different number of floating point operations to be executed.
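To make the stencil-based, struct-of-arrays formulation of this same-level pass concrete, a simplified CPU sketch is shown below. The data layout, the handling of halo cells, and the per-interaction math are placeholders; only the loop structure (every cell of the sub-grid visits a fixed list of roughly 1074 stencil offsets) mirrors the description above.

#include <vector>

struct Soa {                        // struct-of-arrays over sub-grid plus halo cells
    std::vector<double> mass;
    std::vector<double> potential;
};

void same_level_pass(Soa& data,
                     const std::vector<int>& stencil_offsets, // ~1074 linearized offsets
                     const std::vector<int>& cell_indices)    // 512 interior cell indices
{
    for (int cell : cell_indices) {
        double acc = 0.0;
        for (int off : stencil_offsets) {
            int neighbor = cell + off;       // neighbor inside the sub-grid or its halo
            acc += data.mass[neighbor];      // placeholder for the real cell-to-cell interaction
        }
        data.potential[cell] += acc;         // accumulate into the Taylor coefficients (simplified)
    }
}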
For monopole-monopole interactions we execute 12 floating point operations per interaction, and for multipole-multipole/monopole interactions 455 floating point operations. More information about the kernels can be found in [45]; however, the number of floating point operations per monopole interaction differs slightly there, as we had combined the two monopole-X kernels in that work. IMPROVING OCTO-TIGER USING HIGH-LEVEL ABSTRACTIONS Running an irregular, adaptive application like Octo-Tiger on a heterogeneous supercomputer like Piz Daint presents challenges: The pockets of parallelism contained in each octree node must be run efficiently on the GPU, despite the relatively small number of cells in each sub-grid. The GPU implementation should not degrade parallel efficiency through overheads such as work aggregation, CPU/GPU synchronization, or blocked CPU threads. Furthermore, we expect the implementation to behave as before, with the exception of faster GPU execution of tasks. In this section, we first present our implementation and integration of FMM GPU kernels into the task flow using HPX CUDA futures as a high-level abstraction. We then introduce the libfabric parcelport and show how this new communication layer improves scalability of Octo-Tiger by taking advantage of HPX's communication abstractions. Asynchronous Many Tasks with GPUs As our FMM implementation is stencil-based and uses a struct-of-arrays data structure, the FMM kernels introduced in Section 4.3 are very amenable to GPU execution. Each kernel executes a 1074-element stencil on the 512 cells of the 8x8x8 sub-grid of an octree node, calculating the gravitational interactions of each cell with its 1074 neighbors. We parallelize over the cells of the sub-grid, launching kernels with 8 blocks, each containing 64 CUDA threads which execute the whole stencil for each cell. The stencil-based computation of the interactions between two cells is done the same way as on the CPU. In fact, since we use Vc datatypes for vectorization on the CPU, we can simply instantiate the same function template (that computes the interaction between two cells) with scalar datatypes and call it within the GPU kernel. GPU-specific optimizations are done in a wrapper around this cell-to-cell method and the loop over the stencil elements. This wrapper includes the usual CUDA optimizations such as shared and constant memory usage. Thus far we have used standard CUDA to create rather normal kernels for the FMM implementation. However, these kernels alone suffer from two major issues: As it stands, the execution of a GPU kernel would block the CPU thread launching it; no other task would be scheduled or executed whilst it runs. As Octo-Tiger relies on having thousands of tasks available simultaneously for scalability, this presents a problem. The second issue is obvious when looking at the size of the workgroups and the number of blocks for each GPU kernel launch mentioned above. The GPU kernels do not expose enough parallelism to fully utilize a GPU such as the NVIDIA P100 using only small workgroups and 8 blocks per kernel. To solve these two issues, we provide an HPX abstraction for CUDA streams. For any CUDA stream event we create an HPX future that becomes ready once operations in the stream (up to the point of the event/future's creation) are finished. Internally, this is created using a CUDA callback function that sets the future ready [24].
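A minimal sketch of this mechanism is shown below. HPX ships its own, more complete implementation; the promise type and header names here follow HPX 1.x conventions and are assumptions, and error handling is omitted.

#include <hpx/include/lcos.hpp>
#include <cuda_runtime.h>

// Host callback invoked by the CUDA runtime once all prior work in the stream finished.
void CUDART_CB set_ready(cudaStream_t, cudaError_t, void* user_data)
{
    auto* p = static_cast<hpx::lcos::local::promise<void>*>(user_data);
    p->set_value();   // make the future ready; HPX can now schedule its continuations
    delete p;
}

// Returns a future that becomes ready when everything enqueued in 'stream'
// up to this point has completed.
hpx::future<void> get_stream_future(cudaStream_t stream)
{
    auto* p = new hpx::lcos::local::promise<void>();
    hpx::future<void> f = p->get_future();
    cudaStreamAddCallback(stream, set_ready, p, 0);
    return f;
}

// usage sketch: launch a kernel into 'stream', then
//   get_stream_future(stream).then([](hpx::future<void>) { /* dependent task */ });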
This seemingly simple construct allows us to fully integrate CUDA kernels within the HPX runtime, as it provides a synchronization point for the CUDA stream that is compatible with the HPX scheduler. It yields multiple immediate advantages: • Seamless and automatic execution of kernels and overlapping of CPU/GPU tasks; • overlapping of computation and communication, as some HPX tasks are related to the communication with other compute nodes; and • CPU/GPU data synchronization: completed GPU kernels trigger the scheduler and signal that buffers can now be used or copied. Furthermore, the integration is mostly non-invasive, since a CUDA kernel invocation now equates to a function call returning a future. The rest of the kernel implementation and the (asynchronous) buffer handling uses the normal CUDA API; thus, the GPU kernels themselves can still be hand-optimized. Nonetheless, this integration alone does not solve the second issue: The kernels are too fine-grained to fully utilize the GPUs. Conventional approaches to solve this include work aggregation and execution models where CUDA kernels can call other kernels and coalesce execution. Unfortunately, work aggregation schemes, as described in [42], do not fit our task-based approach. Individual kernels should finish as soon as possible in order to trigger dependent tasks, such as communication with other nodes or the third FMM step; delays in launching these may lead to a degradation of parallel efficiency. Recursively calling other GPU kernels as in [59] poses a similar problem, as we would traverse the octree on the GPU, making communication calls more difficult. Furthermore, we would like to run code on the appropriate device: tree traversals on the CPU and processing of the octree kernels on the GPU. Here, however, we can exploit the fact that the execution of GPU kernels is just another task to the HPX runtime system: We launch a multitude of different GPU kernels concurrently on different streams, with each CPU thread handling multiple CUDA streams and thus multiple GPU kernels concurrently. Normally, this would present problems for CPU/GPU synchronization, as GPU results are needed for other CPU tasks. But the continuation-passing style of program execution in HPX, chaining dependent tasks onto futures, makes this trivial. When a GPU kernel output (or data transfer) that has not yet finished is needed for a task, the runtime assigns different work to the CPU and schedules the dependent tasks when the GPU future becomes ready. When the number of concurrent GPU tasks running matches the total number of available CUDA streams (usually 128 per GPU), new kernels are instead executed as CPU tasks until a CUDA stream becomes empty again. In summary, the octree is traversed on the CPU, with tasks spawned asynchronously for kernels on the GPU or CPU, returning a future for each. Any tasks that require results from previous ones are attached as continuations to the futures. The CPU is continuously supplied with new work (including communication tasks) as futures complete. Since all CPU threads may participate in traversal and steal work from each other, we keep the GPU busy by virtue of the sheer number of concurrent GPU kernels submitted. Octo-Tiger is the first application to use HPX CUDA futures.
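For completeness, the launch configuration described at the beginning of this subsection (8 blocks of 64 CUDA threads, one thread per cell of the 8x8x8 sub-grid) can be sketched as the following simplified CUDA kernel. The data layout and the per-interaction math are placeholders, not Octo-Tiger's real kernel.

__global__ void fmm_same_level(const double* mass, double* potential,
                               const int* stencil_offsets, int stencil_size)
{
    int cell = blockIdx.x * blockDim.x + threadIdx.x;  // 8 blocks * 64 threads = 512 cells
    double acc = 0.0;
    for (int i = 0; i < stencil_size; ++i)             // ~1074 stencil entries
        acc += mass[cell + stencil_offsets[i]];        // placeholder cell-to-cell interaction
    potential[cell] += acc;
}

// host side, launched per octree node into one of many CUDA streams:
//   fmm_same_level<<<8, 64, 0, stream>>>(d_mass, d_potential, d_stencil, 1074);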
Octo-Tiger is in fact an ideal fit for this kind of GPU integration: Parallelization is possible only within individual timesteps of the application, and a production-run simulation will require tens of thousands of them, making it essential to maximize parallel efficiency (as well as proper GPU usage), particularly as each timestep might run for only a fraction of a second on the whole machine. The fine-grained approach to GPU usage presented here fits these challenges perfectly. In Section 6 we show how this model performs. We run a real-world scenario for a few timesteps to show both that we achieve a significant fraction of GPU peak performance during the execution of the FMM and that we scale on the whole Piz Daint machine, with each of the 5400 compute nodes using an NVIDIA P100 GPU. Thus, Octo-Tiger also serves as a proof of concept, showing that large, tree-based applications containing pockets of parallelism can efficiently run fine-grained parallel tasks on the GPU without compromising scalability with HPX. Active messages and libfabric parcelport The programming model of HPX does not rely on the user matching network sends and receives explicitly as one would do with MPI. Instead, active messages are used to transfer data and trigger a function on a remote node; we refer to the triggering of remote functions with bound arguments as actions and the messages containing the serialized data and remote function as parcels [7]. A halo exchange, for example, written using MPI involves a receive operation posted on one node and a matching send on another. With non-blocking MPI operations, the user may check for readiness of the received data at a convenient place in the code and then act appropriately. With blocking ones, the user must wait for the received data and can only continue once it arrives. With HPX, the same halo exchange may be accomplished by creating a future for some data on the receiving end, and having the sending end trigger an action that sets the future ready with the contents of the parcel data. Since futures in HPX are the basic synchronization primitive for work, the user may attach a continuation to the received data to start the next calculation that depends on it. The user therefore does not have to perform any test for readiness of the received data: When it arrives, the runtime will set the future and schedule whatever work depends upon it automatically. This combines the convenience of a blocking receive that triggers work with that of an asynchronous receive that allows the runtime to continue whilst waiting. The asynchronous send/receive abstraction in HPX has been extended with the concept of a channel that the receiving end may fetch futures from (for timesteps ahead if desired) and the sending end may push data into as it is generated. Channels are set up by the user similarly to MPI communicators; however, the handles to channels are managed by AGAS (Sect. 4.1). Even when a grid cell is migrated from one node to another during operation, the runtime manages the updated destination address transparently, allowing the user code to send data to the relocated grid with minimal disruption. These abstractions greatly simplify user-level code and allow performance improvements in the runtime to be propagated seamlessly to all places that use them. The default messaging layer in HPX is built on top of the asynchronous two-sided MPI API and uses Isend/Irecv within the parcel encoding and decoding steps of action transmission and execution.
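Returning briefly to the channel abstraction described above, a minimal sketch of a channel-based halo exchange is given below. It follows the pattern of HPX's channel examples; the headers, exact signatures, the channel name, and the lifetime management are assumptions and will differ from Octo-Tiger's actual implementation.

#include <hpx/include/lcos.hpp>
#include <vector>

using halo_t = std::vector<double>;

// Receiving side: create a channel, make it globally addressable via AGAS,
// and attach a continuation to the next incoming halo (lifetime management omitted).
void setup_receiver()
{
    hpx::lcos::channel<halo_t> recv(hpx::find_here());
    recv.register_as("/halo/neighbor_42");          // hypothetical channel name

    hpx::future<halo_t> next = recv.get();          // future for the next halo
    next.then([](hpx::future<halo_t> f) {
        halo_t halo = f.get();
        // ... start the stencil update that depends on this halo ...
    });
}

// Sending side: connect to the registered channel and push data as it is produced.
void send_halo(halo_t const& halo)
{
    hpx::lcos::channel<halo_t> send;
    send.connect_to("/halo/neighbor_42");
    send.set(halo);                                 // triggers the remote future
}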
HPX is designed from the ground up to be multi-threaded, to avoid locking/waiting, and to instead suspend tasks and execute others as soon as any blocking activity takes place. Although MPI supports multi-threaded applications, it has its own internal progress/scheduling management and locking mechanisms that interfere with the smooth running of the HPX runtime. The scheduling in MPI is in turn built upon the network provider's asynchronous completion-queue handling and multi-threaded support, which may also use OS-level locks that suspend threads (and thus impede HPX progress). The HPX parcel format is more complex than a simple MPI message, but the overheads of packing data can be kept to a minimum [7] by using remote memory access (RMA) for transfers. All user/packed data buffers larger than the eager message size threshold are encoded as pointers and exchanged between nodes using one-sided RMA put/get operations. Switching HPX to use the one-sided MPI RMA API is no solution, as this involves memory registration/pinning that is passed through to the provider-level API, causing additional (unwanted) synchronization between user code, MPI code, and the underlying network/fabric driver. Bypassing MPI and using the network API directly to improve performance was seen as a way of decreasing latency, improving memory management, simplifying the parcelport code, and better integrating the multi-threaded runtime with the communications layer. Libfabric was chosen as it has an ideal API that is supported on many platforms, including Cray machines via the GNI provider [46]. The purely asynchronous API of libfabric blends seamlessly with the asynchronous internals of HPX. Any task-scheduling thread may poll for completions in libfabric and set futures to received data without any intervening layer. A one-to-one mapping of completion events to ready futures is possible for some actions, and dependencies for those futures can be immediately scheduled for execution. We expose pinned memory buffers for RMA to libfabric via allocators in the HPX runtime, so that internal data copying between user buffers (halos for example) and the network is minimized. When dealing with GPUs capable of multi-TFLOP performance, even delays on the order of microseconds in receiving data and launching subsequent tasks translate to a significant loss of compute capability. Note that with the HPX API it is trivial to reserve cores for thread pools dedicated to background processing of the network, separate from normal task execution, to further improve performance, but this has not yet been attempted with the Octo-Tiger code. Our libfabric parcelport uses only a small subset of the libfabric API but delivers very high performance, as we demonstrate in Sect. 6 in comparison with MPI. Similar gains could probably be made using the MPI RMA API, but this would require a much more complex implementation. It is a key contribution of this work that we have demonstrated that an application may benefit from significant performance improvements in the runtime without changing a single line of the application code. This has been achieved utilizing abstractions for task management, scheduling, distribution, and messaging. It is generally true of any library that improvements in performance will produce corresponding improvements in code using it.
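To illustrate the completion-queue polling described earlier in this subsection, the sketch below shows how a scheduler thread could drain a libfabric completion queue between tasks and turn completions into ready futures. Only fi_cq_read and the entry struct are actual libfabric API; the promise bookkeeping behind set_future_ready is a hypothetical stand-in for the parcelport's internal machinery.

#include <rdma/fabric.h>
#include <rdma/fi_cq.h>
#include <rdma/fi_errno.h>

// Hypothetical hook: look up the promise associated with this operation's context
// pointer and make the corresponding HPX future ready (details omitted).
void set_future_ready(void* /*op_context*/) { /* ... */ }

void poll_completion_queue(fid_cq* cq)
{
    fi_cq_msg_entry entries[16];
    ssize_t n = fi_cq_read(cq, entries, 16);       // non-blocking read of completions
    if (n == -FI_EAGAIN)
        return;                                    // nothing completed: go run tasks instead
    if (n < 0)
        return;                                    // error handling omitted in this sketch
    for (ssize_t i = 0; i < n; ++i)
        set_future_ready(entries[i].op_context);   // wake the task waiting on this transfer
}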
However, switching a large codebase to one-sided or asynchronous messaging is usually a major operation that involves redesigns of significant portions to handle synchronization between previously isolated (or sequential) sections. The unified futurized and asynchronous API of HPX provides a unique opportunity to take advantage of improvements at all levels of parallelism throughout a code, as all tasks are naturally overlapped. Network bandwidth and latency improvements not only reduce waiting for remote data; the improved scheduling of all messages (synchronization of remote tasks as well as direct data movement) also directly improves on-node scheduling and thus benefits all tasks. RESULTS The initial model of our V1309 simulation includes a 1.54 M⊙ primary and a 0.17 M⊙ secondary. Each has a helium core and a solar-composition envelope, and there is a common envelope surrounding both stars. The simulation domain is a cubic grid with edges 1.02 × 10³ R⊙ long. This is about 160 times larger than the initial orbital separation, providing space for any mass ejected from the system. The sub-grids are 8 × 8 × 8 grid cells. The centers of mass of the components are 6.37 R⊙ apart. The grid is rotating about the z-axis with a period of 1.42 days, corresponding to the initial period of the binary. For the level 14 run, both stars are refined down to 12 levels, with the core of the accretor and donor refined to 13 and 14 levels, respectively. The level 15, 16, and 17 runs are successively refined one more level in each refinement regime. At the finest level, each grid cell is 7.80 × 10⁻³ R⊙ in each dimension for level 14, down to 9.750 × 10⁻⁴ R⊙ for level 17. Although available compute time allowed us only to simulate a few time-steps for this work, this is exactly the production scenario we aim for. For all obtained results, the software dependencies in Table 1 were used to build Octo-Tiger (d6ad085) on the various platforms. FMM Node-Level Performance In the following, we will take a closer look at the performance of the FMM kernels, discussed in Sect. 4.3 and 5.1, on both GPUs and different CPU platforms. We will first explain how we made measurements and then discuss the results. 6.1.1 Measuring the Node-Level Performance. Measuring the node-level results for the FMM solver alone presents several challenges. Instead of a few large kernels, we are executing millions of small FMM kernels overall. Additionally, one FMM kernel alone will never utilize the complete device. On the CPU, each FMM kernel is executed by just one core. We cannot assume that the other cores will always be busy executing an FMM kernel as well. On the GPU, one kernel will utilize only up to 8 Streaming Multiprocessors (SMs). The NVIDIA P100 GPU contains 56 of these SMs, each of which is analogous to a SIMD-enabled processor core. In order to see how well we utilize the given hardware with the FMM kernels, we focus not on the performance of a single kernel, but rather on the overall performance while computing the gravity during the GPU-accelerated FMM part of the code. In order to calculate both the GFLOP/s and the fraction of peak performance, we need to know the number of floating point operations executed while calculating the gravity, as well as the time required to do so. The first piece of information is easy to collect. Each FMM kernel always executes a constant number of floating point operations.
We count the number of kernel launches in each HPX thread and accumulate this number until the end of the simulation. We can further record whether a kernel was executed on the CPU or the GPU. Due to the interleaving of kernels and the general lack of synchronization points between the gravity solver and the fluid solver, the amount of runtime spent in the FMM solver is more difficult to obtain. To measure it, we run the simulation multiple times: first on the CPU without any GPUs. We collect profiling data with perf to get an estimate of the fraction of the runtime spent within the FMM kernels and thus the gravity solver. With this information we calculate the fraction of the runtime spent outside the gravity solver. Afterwards, we repeat the run without perf and multiply its total runtime by the previously obtained runtime fractions to get both the time spent in the gravity solver and the time spent in other methods. With this information, as well as the counters for the FMM kernel launches, we can now calculate the GFLOP/s achieved by the CPU when executing the FMM kernels. To get the same information for the GPUs, we include them in a third run of the same simulation. Using the GPUs, only the runtime of the gravity solver will improve, since the rest of the code does not benefit from them. Thus, by subtracting the runtime spent outside of the FMM kernels in the CPU-only run from the total runtime of the third run, we can estimate the overall runtime of the GPU-enabled FMM kernels and, with that, the GFLOP/s we achieve overall during their execution. For all results in this work, we employ the same V1309 scenario and double-precision calculations. The level 14 octree discretization considered here will serve as the baseline for scaling runs. Results. The results of our node-level runs can be found in Tab. 2. After switching the FMM to a stencil-based approach instead of the old interaction lists, the fraction of time spent in the two main FMM kernels shrank considerably. On the Intel Xeon E5-2660 v3 with 20 cores, they now only make up 38% of the total runtime. On the Intel Xeon Phi 7210 this difference is even larger, with the FMM only making up 20% of the total runtime. This is most likely due to the fact that the other, less optimized parts of Octo-Tiger make less use of the SIMD capabilities that the Xeon Phi offers and thus run considerably slower. This reduces the overall fraction of the FMM runtime compared to the rest of the code. Nevertheless, we achieve a significant fraction of peak performance on all devices. On the CPU side, the Xeon Phi 7210 achieves the most GFLOP/s within the FMM kernels. Since it lowers its frequency to 1.1 GHz during AVX-intensive calculations, the real achieved fraction of peak performance may be significantly higher than 17%. We have assumed the base (unthrottled) clock rate shown in the table for calculating the theoretical peak performance of the CPU devices. Other than running a specific Vc version that supports AVX512 on Xeon Phi, we did not adapt the code. However, we attain a reasonable fraction of peak performance on this difficult hardware. On the AVX2 CPUs we reach about 30%. We tested GPU performance of the FMM kernels in multiple hardware configurations; we used either 10 or 20 cores in combination with either one or two V100 GPUs. Using two V100 GPUs, an insufficient number of cores affects performance. With 20 cores and two GPUs we achieve 37% of the combined V100 peak performance.
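The GFLOP/s figures above follow directly from the launch counters and timing fractions described in Sect. 6.1.1. A small worked sketch of this bookkeeping, using the per-kernel counts from Sect. 4.3 (512 cells, 1074 stencil entries, 455 resp. 12 floating point operations per interaction) and placeholder launch counts and timings, is:

#include <cstdint>
#include <cstdio>

int main()
{
    constexpr double interactions_per_launch = 512.0 * 1074.0;       // 549,888
    constexpr double flops_multipole = interactions_per_launch * 455.0;
    constexpr double flops_monopole  = interactions_per_launch * 12.0;

    // accumulated per-thread launch counters (placeholder values)
    std::uint64_t multipole_launches = 1000000;
    std::uint64_t monopole_launches  = 1500000;

    // time attributed to the gravity solver, e.g. total runtime multiplied by the
    // FMM fraction obtained from a perf profile of a CPU-only run (placeholder)
    double gravity_seconds = 120.0;

    double total_flops = multipole_launches * flops_multipole
                       + monopole_launches * flops_monopole;
    std::printf("FMM performance: %.1f GFLOP/s\n", total_flops / gravity_seconds / 1e9);
    return 0;
}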
When reducing to 10 cores (still with two V100s), the performance drops to 22% of peak: the GPUs are starved of work, since the 10 cores have many tasks of their own to work on and cannot launch enough kernels on the GPU. When utilizing one V100 GPU managed by 10 cores, we achieve 32% of peak performance on the GPU. But using one V100 with 20 CPU cores, the performance decreases to only 22% of peak: the number of threads used to fill the CUDA streams of the GPU directly affects the performance. This effect can be explained by the way we handle CUDA streams. Each CPU thread manages a certain number of CUDA streams. When launching a kernel, a thread first checks whether all of the CUDA streams it manages are busy. If not, the kernel will be launched on the GPU using an idle stream. Otherwise, the kernel will be executed on the CPU by the current CPU worker thread. Executing an FMM kernel on the CPU takes significantly longer than on the GPU, as one CPU kernel will be executed on one core. In a CPU-only setting, all cores are working on FMM kernels of different octree nodes. With 20 cores and one V100, the CPU threads first fill all 128 streams with 128 kernel launches. When the next kernels are due to be launched, the GPU has not finished yet, and the CPU threads start to work on FMM kernels themselves. This leads to starvation of the GPU for a short period of time, as the CPU threads are not launching more work on the GPU in the meantime. Having two V100s offsets the problem, as the cores are less likely to work on the FMM themselves: It is more likely that there is a free CUDA stream available. We analyzed the number of kernels launched on the GPU to provide further data on this. Using 20 cores and one V100 we launch 97.4995% of all multipole-multipole FMM kernels on the GPU. Using 10 cores and one V100 this number increases to 99.9997%. Considering that an FMM execution on one CPU core takes longer than on the GPU and that no other GPU kernels are launched by that thread during this time, the small difference in percentage can cause a large performance impact. This is a current limitation of our implementation and will be addressed in the next version of Octo-Tiger: There is no reason not to launch multiple FMM kernels in one stream if there is no empty stream available. This would lead to 100% of the FMM kernels being launched on the GPU, independent of the CPU hardware. Table 4: Number of tree nodes (sub-grids) per level of refinement (LoR) and the memory usage of the corresponding level. Since Piz Daint is our target system, we also evaluated performance on one of its nodes, using 128 CUDA streams. For comparison, 99.5207% of all multipole-multipole FMM kernels were launched on the GPU. We achieve about 21% of peak performance on the GPU. In summary, we were able to demonstrate that the uncommon approach of launching many small kernels is a valid way to utilize the GPU. Scaling results All of the presented distributed scaling results were obtained on Piz Daint at the Swiss National Supercomputing Centre. Table 3 lists the hardware configuration of Piz Daint. For the scalability analysis of Octo-Tiger, different levels of refinement of the V1309 scenario were run, as shown in Tab. 4. A level 13 restart file, which takes less than an hour to generate on an Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz, was used as the basis for all runs. For all levels, the level 13 restart file was read and refined to higher levels of resolution through conservative interpolation of the evolved variables.
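Before turning to the distributed scaling results, the per-thread stream-handling policy described in Sect. 6.1 above (launch on an idle stream owned by the current thread, otherwise fall back to CPU execution) can be sketched as follows. The names are illustrative, and the real implementation tracks stream availability via HPX futures rather than by querying CUDA directly.

#include <cuda_runtime.h>
#include <vector>

struct StreamSlot {
    cudaStream_t stream;
    bool idle() const { return cudaStreamQuery(stream) == cudaSuccess; }
};

template <typename GpuLaunch, typename CpuKernel>
void dispatch_fmm_kernel(std::vector<StreamSlot>& my_streams,
                         GpuLaunch launch_on_gpu, CpuKernel run_on_cpu)
{
    for (auto& slot : my_streams) {
        if (slot.idle()) {               // found a free stream owned by this worker thread
            launch_on_gpu(slot.stream);  // asynchronous GPU execution
            return;
        }
    }
    run_on_cpu();                        // all streams busy: fall back to the CPU
}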
The number of nodes was increased in powers of two (1, 2, 4, . . ., 4096), with a maximum of 5400 nodes, which corresponds to the full system on Piz Daint. All runs utilized 12 CPU cores on each node, i.e. up to 64,800 cores for the full-system run. The simulations started at level 14, the smallest level that fits into the memory of a single Piz Daint node while still consisting of an acceptable number of sub-grids to expose sufficient parallelism. The number of nodes was increased by a power of two until the scaling saturated due to too little work per node. Higher refinement levels were then run on the largest overlapping node counts to produce the graph shown in Fig. 2, where the speedup is calculated with respect to the number of processed sub-grids per second on one node at level 14. The graph therefore shows a combination of weak scaling as the level of refinement increases and strong scaling for each refinement level as the node count increases. Weak scaling is clearly very good, with close to optimal improvements with successive refinement levels. Strong scaling tails off as the number of sub-grids for each level becomes too small to generate sufficient work for all CPUs/GPUs. The performance difference between the number of sub-grids processed per second for the two parcelports increases with higher node counts and refinement levels, a clear sign that communication is responsible for causing delays that prevent the processing cores from getting work done. Each increase in the refinement level can, due to AMR, increase the total number of grids by up to a factor of 8; see Tab. 4 for the actual values. This causes a near-quadratic increase in the total number of halos exchanged. As the node count increases, the probability that a given halo exchange crosses node boundaries increases, and it is therefore no surprise that reduced communication latency leads to the large gains observed. Network performance results The improvement in communication is due to all of the following changes: • Explicit use of RMA for the transfer of halo buffers. • Lower latency on send and receive of all parcels and execution of RMA transfers. • Direct control of all memory copies for send/receive buffers between the HPX runtime and the libfabric driver. • Reduced overhead between receipt of a transfer/message completion event and the subsequent setting of a ready future. • A thread-safe, lock-free interface between the HPX scheduling loop and the libfabric API, with polling for network progress/completions integrated into the HPX task scheduling loop. It is important to note that the timing results shown are for the core calculation steps that exchange halos, and the figures do not include regridding steps or I/O that also make heavy use of communication. Including them would further illustrate the effectiveness of the networking layer: Start-up timings of the main solver at refinement levels 16 and 17 were in fact reduced by an order of magnitude using the libfabric parcelport, increasing the efficiency of refining the initial restart file of level 13 to the desired level of resolution. Note further that some data points at levels 16 and 17 for large runs are missing, as the start-up time consumed the limited node hours available for their execution. The communication speedups shown have not separately considered the effects of thread pools and the scheduling of network progress on the rates of injection or the handling of messages.
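The speedup plotted in Fig. 2 is defined relative to the sub-grid processing rate of a single node at level 14. A small helper making that definition (and one plausible reading of the parallel-efficiency figure quoted in the abstract) explicit, with placeholder numbers, is:

#include <cstdio>

int main()
{
    double baseline_subgrids_per_s = 100.0;     // 1 node, level 14 (placeholder value)
    double measured_subgrids_per_s = 150000.0;  // e.g. a high-resolution run at scale (placeholder)
    int nodes = 2048;

    double speedup = measured_subgrids_per_s / baseline_subgrids_per_s;
    double parallel_efficiency = speedup / nodes;   // relative to ideal linear scaling
    std::printf("speedup %.1f, parallel efficiency %.1f%%\n",
                speedup, 100.0 * parallel_efficiency);
    return 0;
}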
When running on Piz Daint with 12 worker threads executing tasks, any thread might need to send data across the network. In general, the injection of data into send queues does not cause problems unless many threads are attempting to do so concurrently and the send queues are full. The receipt of data, however, must be performed by polling of completion queues. This can only take place in between the execution of other tasks. Thus, if all cores are busy with work, no polling is done, and if no work is available, all cores compete for access to the network. The effects can be observed in Fig. 3, where the libfabric parcelport causes a slight reduction in performance for lower node counts. With GPUs doing most of the work, CPU cores can be reserved for network processing, and the job of polling can be restricted to a subset of cores that have no other (longer running) tasks to execute. HPX supports partitioning of a compute node into separate thread pools with different responsibilities; the effects of this will be investigated further to see whether reducing contention between cores helps to restore the lost performance. CONCLUSIONS AND FUTURE WORK As the core contributions of this paper, we have demonstrated node-level and distributed performance of Octo-Tiger, an astrophysics code simulating a stellar binary merger. We have shown excellent scaling up to the full system on Piz Daint and improved network performance based on the libfabric library. The high-level abstractions we employ, in particular HPX and Vc, demonstrate how portability in heterogeneous HPC systems is possible. This is the first time an HPX application was run on a full system of a GPU-accelerated supercomputer. This work also has several implications for parallel programming for future architectures. Asynchronous many-task runtime systems like HPX are a powerful, viable, and promising addition to the current landscape of parallel programming models. We show that it is not only possible to utilize these emerging tools to perform at the largest scales, but also that it might even be desirable to leverage the latency hiding, finer-grained parallelism, and natural support for heterogeneity that the asynchronous many-task model exposes. In particular, we have significantly increased the node-level performance of the originally most compute-hungry part of Octo-Tiger, the gravitational solver. Our optimizations have demonstrated excellent node-level performance on different HPC compute nodes with heterogeneous hardware, including multi-GPU systems and KNL. We have achieved up to 37% of the peak performance on two NVIDIA V100 GPUs, and 17% of peak on a KNL system. To achieve high node-level performance for the full simulation, we will also port the remaining part, the hydrodynamics solver, to GPUs. The distributed scaling results have been obtained within a development project on Piz Daint and thus with severely limited compute time. The excellent results presented in this paper have already built the foundation for a production proposal that will enable us to target full-resolution simulations with impact on physics. Despite the significant performance improvement from replacing MPI with libfabric, there are more networking improvements under development that have not yet been incorporated into Octo-Tiger. This includes the use of user-controlled RMA buffers that allow the user to instruct the runtime that certain memory regions will be used repeatedly for communication (and thus amortize memory pinning/registration costs).
Integration of such features into the channel abstraction may prove to reduce latencies further and is an area we will explore. With respect to the astrophysical application, we have already developed a radiation transport module for Octo-Tiger based on the two-moment approach adapted from [48]. This will be required to simulate the V1309 merger with high accuracy. What remains is to fully debug and verify this module and to port the implementation to GPUs. Finally, our full-scale simulations will be able to predict the outcome of mergers that have not yet happened: these simulations will be useful for comparison with future "red nova" contact-binary merger events. Two contact-binary systems have been suggested as future mergers, KIC 9832227 [40,49] and TY Pup [47]. Other candidate systems will be discovered with the new all-sky surveys such as the Zwicky Transient Facility (ZTF) and the Large Synoptic Survey Telescope (LSST).
8,475
1908.03121
2966129987
We study the simulation of stellar mergers, which requires complex simulations with high computational demands. We have developed Octo-Tiger, a finite volume grid-based hydrodynamics simulation code with Adaptive Mesh Refinement which is unique in conserving both linear and angular momentum to machine precision. To face the challenge of increasingly complex, diverse, and heterogeneous HPC systems, Octo-Tiger relies on high-level programming abstractions. We use HPX with its futurization capabilities to ensure scalability both between and within nodes, and present first results replacing MPI with libfabric, achieving up to a 2.8x speedup. We extend Octo-Tiger to heterogeneous GPU-accelerated supercomputers, demonstrating node-level performance and portability. We show scalability up to full system runs on Piz Daint. For the scenario's maximum resolution, the compute-critical parts (hydrodynamics and gravity) achieve 68.1% parallel efficiency at 2048 nodes.
Several particle-based FMM implementations utilizing task-based programming are available. The approach described in @cite_15 uses the Quark runtime environment @cite_21 , the implementation in @cite_10 @cite_42 uses StarPu @cite_30 , whilst @cite_11 uses OpenMP @cite_35 , and @cite_16 compares Cilk @cite_3 , HPX-5, and OpenMP tasks @cite_52 . Our choice of HPX as the task-based runtime system is motivated by the same findings as the above-mentioned review and by the need to implement specialized kernels for energy conservation that require coupling between different parts of the solver.
{ "abstract": [ "In the field of HPC, the current hardware trend is to design multiprocessor architectures featuring heterogeneous technologies such as specialized coprocessors (e.g. Cell BE) or data-parallel accelerators (e.g. GPUs). Approaching the theoretical performance of these architectures is a complex issue. Indeed, substantial efforts have already been devoted to efficiently offload parts of the computations. However, designing an execution model that unifies all computing units and associated embedded memory remains a main challenge. We therefore designed StarPU, an original runtime system providing a high-level, unified execution model tightly coupled with an expressive data management library. The main goal of StarPU is to provide numerical kernel designers with a convenient way to generate parallel tasks over heterogeneous hardware on the one hand, and easily develop and tune powerful scheduling algorithms on the other hand. We have developed several strategies that can be selected seamlessly at run-time, and we have analyzed their efficiency on several algorithms running simultaneously over multiple cores and a GPU. In addition to substantial improvements regarding execution times, we have obtained consistent superlinear parallelism by actually exploiting the heterogeneous nature of the machine. We eventually show that our dynamic approach competes with the highly optimized MAGMA library and overcomes the limitations of the corresponding static scheduling in a portable way. Copyright © 2010 John Wiley & Sons, Ltd.", "", "", "Most high-performance, scientific libraries have adopted hybrid parallelization schemes - such as the popular MPI+OpenMP hybridization - to benefit from the capacities of modern distributed-memory machines. While these approaches have shown to achieve high performance, they require a lot of effort to design and maintain sophisticated synchronization communication strategies. On the other hand, task-based programming paradigms aim at delegating this burden to a runtime system for maximizing productivity. In this article, we assess the potential of task-based fast multipole methods (FMM) on clusters of multicore processors. We propose both a hybrid MPI+task FMM parallelization and a pure task-based parallelization where the MPI communications are implicitly handled by the runtime system. The latter approach yields a very compact code following a sequential task-based programming model. We show that task-based approaches can compete with a hybrid MPI+OpenMP highly optimized code and that furthermore the compact task-based scheme fully matches the performance of the sophisticated, hybrid MPI+task version, ensuring performance while maximizing productivity. We illustrate our discussion with the ScalFMM FMM library and the StarPU runtime system.", "", "Cilk (pronounced “silk”) is a C-based runtime system for multithreaded parallel programming. In this paper, we document the efficiency of the Cilk work-stealing scheduler, both empirically and analytically. We show that on real and synthetic applications, the “work” and “critical-path length” of a Cilk computation can be used to model performance accurately. Consequently, a Cilk programmer can focus on reducing the computation's work and critical-path length, insulated from load balancing and other runtime scheduling issues. We also prove that for the class of “fully strict” (well-structured) programs, the Cilk scheduler achieves space, time, and communication bounds all within a constant factor of optimal. 
The Cilk runtime system currently runs on the Connection Machine CM5 MPP, the Intel Paragon MPP, the Sun Sparcstation SMP, and the Cilk-NOW network of workstations. Applications written in Cilk include protein folding, graphic rendering, backtrack search, and the ★Socrates chess program, which won second prize in the 1995 ICCA World Computer Chess Championship.", "Fast multipole methods FMMs have ON complexity, are compute bound, and require very little synchronization, which makes them a favorable algorithm on next-generation supercomputers. Their most common application is to accelerate N-body problems, but they can also be used to solve boundary integral equations. When the particle distribution is irregular and the tree structure is adaptive, load balancing becomes a non-trivial question. A common strategy for load balancing FMMs is to use the work load from the previous step as weights to statically repartition the next step. The authors discuss in the paper another approach based on data-driven execution to efficiently tackle this challenging load balancing problem. The core idea consists of breaking the most time-consuming stages of the FMMs into smaller tasks. The algorithm can then be represented as a directed acyclic graph where nodes represent tasks and edges represent dependencies among them. The execution of the algorithm is performed by asynchronously scheduling the tasks using the queueing and runtime for kernels runtime environment, in a way such that data dependencies are not violated for numerical correctness purposes. This asynchronous scheduling results in an out-of-order execution. The performance results of the data-driven FMM execution outperform the previous strategy and show linear speedup on a quad-socket quad-core Intel Xeon system.Copyright © 2013 John Wiley & Sons, Ltd.", "In this paper, we explore data-driven execution of the adaptive fast multipole method by asynchronously scheduling available computational tasks using Cilk, C++11 standard thread and future libraries, the High Performance ParalleX (HPX-5) library, and OpenMP tasks. By comparing these implementations using various input data sets, this paper examines the runtime system's capability to spawn new task, the capacity of the tasks that can be managed, the performance impact between eager and lazy thread creation for new task, and the effectiveness of the task scheduler and its ability to recognize the critical path of the underlying algorithm.", "High performance fast multipole method is crucial for the numerical simulation of many physical problems. In a previous study, we have shown that task-based fast multipole method provides the flexibility required to process a wide spectrum of particle distributions efficiently on multicore architectures. In this paper, we now show how such an approach can be extended to fully exploit heterogeneous platforms. For that, we design highly tuned graphics processing unit GPU versions of the two dominant operators P2P and M2L as well as a scheduling strategy that dynamically decides which proportion of subsequent tasks is processed on regular CPU cores and on GPU accelerators. We assess our method with the StarPU runtime system for executing the resulting task flow on an Intel X5650 Nehalem multicore processor possibly enhanced with one, two, or three Nvidia Fermi M2070 or M2090 GPUs Santa Clara, CA, USA. 
A detailed experimental study on two 30 million particle distributions a cube and an ellipsoid shows that the resulting software consistently achieves high performance across architectures. Copyright © 2015 John Wiley & Sons, Ltd.", "This paper presents an optimized CPU--GPU hybrid implementation and a GPU performance model for the kernel-independent fast multipole method (FMM). We implement an optimized kernel-independent FMM for GPUs, and combine it with our previous CPU implementation to create a hybrid CPU+GPU FMM kernel. When compared to another highly optimized GPU implementation, our implementation achieves as much as a 1.9× speedup. We then extend our previous lower bound analyses of FMM for CPUs to include GPUs. This yields a model for predicting the execution times of the different phases of FMM. Using this information, we estimate the execution times of a set of static hybrid schedules on a given system, which allows us to automatically choose the schedule that yields the best performance. In the best case, we achieve a speedup of 1.5× compared to our GPU-only implementation, despite the large difference in computational powers of CPUs and GPUs. We comment on one consequence of having such performance models, which is to enable speculative predictions about FMM scalability on future systems." ], "cite_N": [ "@cite_30", "@cite_35", "@cite_21", "@cite_42", "@cite_52", "@cite_3", "@cite_15", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2121893797", "77989884", "", "2545968212", "", "2032401773", "1731033096", "2540315316", "2206856102", "2051744938" ] }
From Piz Daint to the Stars: Simulation of Stellar Mergers using High-Level Abstractions
Astrophysical simulations are among the classical drivers for exascale computing. They require multiple scales of physics and cover vast scales in space and time. Even the next generation of high-performance computing (HPC) systems will be insufficient to solve more than a fraction of the many conceivable scenarios. However, new HPC systems come not only with ever larger processor counts, but increasingly complex, diverse, and heterogeneous hardware. Evolving manycore architectures and GPUs are combined with multicore systems. This raises challenges especially for large-scale HPC simulation codes and requires going beyond traditional programming models. High-level abstractions are required to ensure that codes are portable and can be run on current HPC systems without the need to rewrite large portions of the code. We consider the simulation of stellar phenomena based on the simulation framework Octo-Tiger. In particular, we study the simulation of time-evolving stellar mergers (Fig. 1). The study of binary star evolution from the onset of mass transfer to merger can provide fundamental insight into the underlying physics. In 2008, this phenomenon was observed with photometric data, when the contact binary V1309 Scorpii merged to form a luminous red nova [58]. The vision of our work is to model this event with simulations on HPC systems. Comparing the results of our simulations with the observations will enable us to validate the model and to improve our understanding of the physical processes involved. Octo-Tiger is an HPC application and relies on high-level abstractions, in particular, HPX and Vc. While HPX provides scheduling and scalability, both between nodes and within, Vc ensures portable vectorization across processor-based platforms. To make use of GPUs we use HPX's CUDA integration in this work. Previous work has demonstrated scalability on Cori, a Cray XC40 system installed at the National Energy Research Scientific Computing Center (NERSC) [27]. However, the underlying node-level performance was rather low, and only a few time steps could be simulated. Consequently, follow-up work studied node-level performance, achieving 408 GFLOPS on the 64 cores of the Intel Knights Landing manycore processor [45]. Using the same high-level abstractions as on multicore systems, this led to a speedup of 2 compared to a 24-core Intel Skylake-SP platform. In this work, we make use of the same CPU-level abstraction library Vc [31] for SIMD vector parallelism as in the previous study, but extend Octo-Tiger to support GPU-based HPC machines. We show how the critical node-level bottleneck, the fast multipole method (FMM) kernels, can be mapped to GPUs. Our approach utilizes GPUs as co-processors, running up to 128 FMM kernels on each one simultaneously. This was implemented using CUDA streams and uses HPX's futurization approach for lock-free, low-overhead scheduling. We demonstrate the performance portability of Octo-Tiger on a set of GPU- and processor-based HPC nodes. To scale more efficiently to thousands of nodes, we have integrated a new libfabric communication backend into HPX, where it can be used transparently by Octo-Tiger, the first large scientific application to use the new network layer. The libfabric implementation extensively uses one-sided communication to reduce the overhead compared to a standard two-sided MPI-based backend.
To demonstrate both our node-level GPU capabilities and our improved scalability with libfabric, we show results for full-scale runs on Piz Daint running the real-world stellar merger scenario of V1309 Scorpii for a few time-steps. Piz Daint is a Cray XC40/XC50 equipped with NVIDIA's P100 GPUs at the Swiss National Supercomputing Centre (CSCS). For our full system runs we used up to 5400 out of 5704 nodes. This is the first time an HPX application was run on a full system of a GPU-accelerated supercomputer. In Sec. 2 we briefly discuss related approaches. We describe the stellar scenario in more detail in Sec. 3, and the important parts of the overall software framework and the high-level abstractions they provide in Sec. 4. In turn, Sec. 5 shows the main contributions of this work, describing both the new libfabric parcelport and the way we utilize GPUs to accelerate the execution of critical kernels. In Sec. 6.1, we present our node-level performance results for NVIDIA GPUs, Intel Xeons, and an Intel Xeon Phi platform. Section 6.2 describes our scaling results, showing that we are able to scale with both an MPI communication backend and a libfabric communication backend of HPX. We show that the use of libfabric strongly improves performance at scale. SCENARIO: STELLAR MERGERS In September 2008, the contact binary V1309 Scorpii merged to form a luminous red nova (LRN) [58]. The Optical Gravitational Lensing Experiment (OGLE) observed this binary prior to its merger, and six years of data show its period decreasing. When the merger occurred, the system increased in brightness by a factor of about 5000. Mason et al. [39] observed the outburst spectroscopically, confirming it as a LRN. This was the first observed stellar merger of a contact binary with photometric data available prior to its merger. Possible progenitor systems for V1309 Scorpii, consisting initially of zero-age main sequence stars with unequal masses in a relatively narrow range, were proposed by Stepien in [50]. As the heavier of the two stars first begins to expand into a red giant, it transfers mass to its lower mass companion, forming a common envelope. The binary's orbit shrinks due to friction, and the mass transfer slows down as the companion becomes the heavier of the two stars but continues to grow at the expense of the first star. Eventually this star also expands, and the two stars come into contact, forming a contact binary. Stepien et al. sampled the space of physically possible initial masses, finding that initial primary masses between 1.1 M⊙ and 1.3 M⊙ and initial secondary masses between 0.5 M⊙ and 0.9 M⊙ produced results consistent with observations prior to merger. The evolution described above results in an approximately 1.52−1.54 M⊙ primary and a 0.16−0.17 M⊙ secondary with helium cores and Sun-like atmospheres. It is theorized that the merger itself was due to the Darwin instability. When the total spin angular momentum of a binary system exceeds one third of its orbital angular momentum, the system can no longer maintain tidal synchronization. This results in a rapid tidal disruption and merger. Octo-Tiger uses its Self-Consistent Field module [20,23] to produce an initial model for V1309 to simulate this last phase of dynamical evolution. The stars are tidally synchronized and share a common atmosphere. The system parameters are chosen such that the spin angular momentum just barely exceeds one third of the orbital angular momentum.
Octo-Tiger begins the simulation just as the Darwin instability sets in (Fig. 1). SOFTWARE FRAMEWORK 4.1 HPX We have developed the Octo-Tiger application framework [52] in ISO C++11 using HPX [24-26, 28, 29, 51]. HPX is a C++ standard library for distributed and parallel programming built on top of an Asynchronous Many Task (AMT) runtime system. Such AMT runtimes may provide a means for helping programming models to fully exploit available parallelism on complex emerging HPC architectures. The HPX methodology described here includes the following essential components: • An ISO C++ standard conforming API that enables wait-free asynchronous parallel programming, including futures, channels, and other primitives for asynchronous execution. • An Active Global Address Space (AGAS) that supports load balancing via object migration and enables exposing a uniform API for local and remote execution. • An active-message networking layer that enables running functions close to the objects they operate on. This also implicitly overlaps computation and communication. • A work-stealing lightweight task scheduler that enables finer-grained parallelization and synchronization and automatic load balancing across all local compute resources. • APEX, an in-situ profiling and adaptive tuning framework. The design features of HPX allow application developers to naturally use key parallelization and optimization techniques, such as overlapping communication and computation, decentralizing control flow, oversubscribing execution resources, and sending work to data instead of data to work. As a result, Octo-Tiger achieves exceptionally high system utilization and exposes very good weak- and strong-scaling behaviour. HPX exposes an asynchronous, standards-conforming programming model enabling Futurization, with which developers can express complex dataflow execution trees that generate billions of HPX tasks that are scheduled to execute only when their dependencies are satisfied [27]. Futurization also allows parallelization and load balancing to emerge automatically. Additionally, HPX provides a performance counter and adaptive tuning framework that allows users to access performance data, such as core utilization, task overheads, and network throughput; these diagnostic tools were instrumental in scaling Octo-Tiger to the full machine. This paper demonstrates the viability of the HPX programming model at scale using Octo-Tiger, a portable and standards-conforming application. Octo-Tiger fully embraces the C++ Parallel Programming Model, including additional constructs that are incrementally being adopted into the ISO C++ Standard. The programming model views the entire supercomputer as a single C++ abstract machine. A set of tasks operates on a set of C++ objects distributed across the system. These objects interact via asynchronous function calls; a function call to an object on a remote node is relayed as an active message to that node. A powerful and composable primitive, the future object represents and manages asynchronous execution and dataflow. A crucial property of this model is the semantic and syntactic equivalence of local and remote operations. This provides a unified approach to intra- and inter-node parallelism based on proven generic algorithms and data structures available in today's ISO C++ Standard. The programming model is intuitive and enables performance portability across a broad spectrum of increasingly diverse HPC hardware.
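To make the futurization style concrete, the following is a minimal, self-contained sketch of the pattern described above; it is not Octo-Tiger code. Only core HPX facilities are used (hpx::async, hpx::dataflow, hpx::init/hpx::finalize); the stand-in work functions are assumptions, and the umbrella header layout differs slightly between HPX versions.

```cpp
#include <hpx/hpx.hpp>
#include <hpx/hpx_init.hpp>
#include <iostream>

// Stand-in work item; in Octo-Tiger these would be solver tasks on octree nodes.
double partial_result(int i) { return 0.5 * i; }

int hpx_main(int, char**)
{
    // Launch two independent tasks; each call returns a future immediately.
    hpx::future<double> a = hpx::async(partial_result, 1);
    hpx::future<double> b = hpx::async(partial_result, 2);

    // Attach a continuation that runs only once both inputs are ready.
    // The current thread never blocks; the scheduler runs other tasks meanwhile.
    hpx::future<double> sum = hpx::dataflow(
        [](hpx::future<double> fa, hpx::future<double> fb) {
            return fa.get() + fb.get();   // both futures are already ready here
        },
        std::move(a), std::move(b));

    std::cout << "result: " << sum.get() << "\n";
    return hpx::finalize();
}

int main(int argc, char* argv[]) { return hpx::init(argc, argv); }
```

Chaining such continuations across thousands of octree nodes is what produces the large dataflow execution trees mentioned above.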
Octo-Tiger Octo-Tiger simulates the evolution of mass density, momentum, and energy of interacting binary stellar systems from the start of mass transfer to merger. It also evolves five passive scalars. It is a three-dimensional finite-volume code with Newtonian gravity that simulates binary star systems as self-gravitating compressible inviscid fluids. To simulate these fluids we need three core components: (1) a hydrodynamics solver, (2) a gravity solver that calculates the gravitational field produced by the fluid distribution, and (3) a solver to generate an initial configuration of the star system. The passive scalars, expressed in units of mass density, are evolved using the same continuity equation that describes the evolution of the mass density. They do not influence the flow itself, but are rather used to track various fluid fractions as the system evolves. In the case of V1309, these scalars are initialized to the mass density of the accretor core, the accretor envelope, the donor core, the donor envelope, and the common atmosphere between the two stars. The passive scalars are useful in post-processing. For instance, to compute the temperature we require the mass and energy densities as well as the number density. The latter is not evolved in the simulation, but can be computed from the passive scalars assuming a composition for each fraction (e.g. helium for both cores, and a solar composition for the remaining fractions). The balance of angular momentum plays an important role in the orbital evolution of binary systems. Three-dimensional astrophysical fluid codes with self-gravity do not typically conserve angular momentum. The magnitude of this violation is dependent on the particular problem and resolution. Previous works have found relative violations as high as 10⁻³ per orbit [16,38,41]. This error, accumulated over several dozen orbits, becomes significant enough to influence the fate of the system. Octo-Tiger conserves both linear and angular momenta to machine precision. In the fluid solver, this is accomplished using a technique described by [18], while the gravity solver uses our own extension to the FMM. Octo-Tiger's main data structure is a rotating Cartesian grid with adaptive mesh refinement (AMR). It is based on an adaptive octree structure. Each node is an N³ sub-grid (with N = 8 for all runs in this paper) containing the evolved variables, and can be further refined into eight child nodes. Each octree node is implemented as an HPX component. These octree nodes are distributed onto the compute nodes using a space filling curve. For further information about implementation details we refer to [45] and [37]. The first solver that operates on this octree is a finite volume hydrodynamics solver. Octo-Tiger uses the central advection scheme of [32]. The piece-wise parabolic method (PPM) [13] is used to compute the thermodynamic variables at cell faces. A method detailed by [38] is used to conserve total energy in its interaction with the gravitational field. This technique involves applying the advection scheme to the sum of gas kinetic, internal, and potential energies, resulting in conservation of the total energy. Numerical precision of internal energy densities can suffer greatly in high Mach flows, where the kinetic energy dwarfs the gas internal energy. We use the dual-energy formalism of [10] to overcome this issue: we evolve both the gas total energy as well as the entropy.
The internal energy is then computed from one or the other depending on the Mach number (entropy for high Mach flows and total gas energy for low Mach ones). The angular momentum technique described by [18] is applied to the PPM reconstruction. It adds a degree of freedom to the reconstruction of velocities on cell faces by allowing for the addition of a spatially constant angular velocity component to the linear velocities. This component is determined by evolving three additional variables corresponding to the spin angular momentum for a given cell. The gravitational field solver is based on the FMM. Octo-Tiger is unique in conserving both linear and angular momentum simultaneously and at scale using modifications to the original FMM algorithm [36,37]. Finally, we assemble the initial scenario using the Self-Consistent Field technique alongside the FMM solver. Octo-Tiger can produce initial models for binary systems that are in contact, semi-detached, or detached [37]. Since it is calculated only once, the computational demands of this solver are negligible for full-size runs. We used a test suite of four verification tests, recommended by Tasker et al. [56] for self-gravitating astrophysical codes, to verify the correctness of our results. The first two are purely hydrodynamic tests: the Sod shock tube and the Sedov-Taylor blast wave. Both have analytical solutions which we can use for comparisons. The third and fourth test are a globular star cluster in equilibrium and one in motion. In each case, the equilibrium structure should be retained. Because Octo-Tiger is intended to simulate individual stars self-consistently, we have substituted a single star in equilibrium at rest for the third test and a single star in equilibrium in motion for the fourth test. The FMM hotspot The most compute-intensive task is the calculation of the gravitational field using the FMM, since this has to be done for each of the fluid-solver time-steps. Note that our FMM variant differs from approaches such as the implementation used in [61]. While being distributed and GPU-capable, their FMM operates on particles. Our FMM variant operates on the grid cells directly, since each grid cell has a density value which determines its mass, and thus its gravitational influence on other cells. We further differ from other (cell-based) FMM variants used for computing gravitational fields by conserving not only linear momentum, but also angular momentum, down to machine precision using the changes outlined in [36]. Due to its computational intensity, we will take a closer look at the FMM and its kernels in this section. The FMM algorithm consists of three steps. First, it computes the multipole moments and the centers of mass of the individual cells. This information is then used to calculate Taylor-series expansion coefficients in the second and third steps. These coefficients can in turn be used to approximate the gravitational potential in a cell, which can then be used by the hydrodynamics solver [37]. The first of the three FMM steps requires a bottom-up traversal of the octree data structure. The fluid density of the cells of the highest level is the starting point. The multipole moments of every other cell are then calculated using the multipole moments of its child cells. We can additionally compute the center of mass for each refined cell. While this step includes a tree traversal, it is not very compute-intensive.
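The bottom-up pass can be illustrated with a strongly simplified sketch that only aggregates the total mass (the monopole moment) and the center of mass of each node from its children. Octo-Tiger's actual kernels propagate full multipole moments and apply the angular-momentum-conserving corrections of [36], which are omitted here; the OctreeNode layout below is purely illustrative.

```cpp
#include <array>
#include <memory>

// Hypothetical, strongly simplified octree node: real Octo-Tiger nodes hold an
// 8x8x8 sub-grid of evolved variables and are distributed HPX components.
struct OctreeNode {
    std::array<std::unique_ptr<OctreeNode>, 8> children;  // all empty for leaf nodes
    double mass = 0.0;                                     // monopole moment
    std::array<double, 3> com{{0.0, 0.0, 0.0}};            // center of mass

    bool is_leaf() const { return !children[0]; }
};

// First FMM step (simplified): propagate mass and center of mass upwards.
// Leaves are assumed to already carry the mass integrated over their cells.
void compute_multipoles_bottom_up(OctreeNode& node)
{
    if (node.is_leaf())
        return;                                  // leaf values come from the fluid density

    node.mass = 0.0;
    node.com = {0.0, 0.0, 0.0};
    for (auto& child : node.children) {
        compute_multipoles_bottom_up(*child);    // recurse to the finest level first
        node.mass += child->mass;
        for (int d = 0; d < 3; ++d)
            node.com[d] += child->mass * child->com[d];
    }
    if (node.mass > 0.0)
        for (int d = 0; d < 3; ++d)
            node.com[d] /= node.mass;            // mass-weighted average of child centers
}
```

In the real code this traversal is expressed as HPX tasks, so independent subtrees can be processed concurrently.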
In the second FMM step (same-level), we use the multipole moments and the centers of mass to calculate how much the gravity in each cell is influenced by its neighboring cells on the same octree level. How many cells are considered as "neighboring" is determined by the so-called opening criteria [37]. However, their number is constant on each level. The result of these interactions is a Taylor-series expansion. This is the most compute-intensive part. In the third FMM step, the gravitational influence of cells outside of the opening criteria is computed, and the octree is traversed top-down. The respective Taylor-series expansion of the parent node is passed to the child nodes and accumulated. In the first and third step we calculate interactions between either child nodes and their respective parents or vice-versa. Since a refined node only has 8 children, the number of these interactions is limited. In the second step, the number of same-level interactions per cell that need to be calculated is much higher. For our choice of parameters, each cell interacts with 1074 of its close neighbors, assuming they exist. The second FMM step (same-level interactions) is by far the most compute-intensive part. Originally, it required about 70% of the total scenario runtime and was thus the core focus of previous optimizations. Originally, lookup of close neighbor cells was performed using an interaction list, and data was stored in an array-of-structs format. In order to improve cache efficiency and vector-unit usage, we changed it to a stencil-based approach and are now utilizing a struct-of-arrays data structure. Compared to the old interaction-list approach, this led to a speedup of the total application runtime between 1.90 and 2.22 on AVX512 CPUs and between 1.23 and 1.35 on AVX2 CPUs [15]. Furthermore, we achieved node-level scaling as well as performance portability between different CPU architectures through the usage of Vc [15,45]. After these optimizations, the FMM required only about 40% (depending on the hardware) of the total scenario runtime, with its compute kernels reaching a significant fraction of peak on multiple platforms, as we will demonstrate in Sect. 6.1. Due to the presence of AMR, there are four different cases of same-level interactions: 1) multipole-monopole interactions between cells of a refined octree node (multipoles) and cells of a non-refined octree node (monopoles); 2) multipole-multipole interactions; 3) monopole-monopole interactions; and 4) monopole-multipole interactions. This yields four kernels per octree node. Their input data are the current node's sub-grid as well as all sub-grids of all neighboring nodes as a halo (ghost layer). The kernels then compute all interactions of a certain type and add the result to the Taylor coefficients of the respective cells in the sub-grid. We were able to combine the multipole-multipole and the multipole-monopole kernels into a single kernel, yielding three compute kernels in our implementation. As the monopole-multipole kernel consumes only about 2% of the total runtime, we ignore it in the following. The remaining two compute kernels, 1)/2) and 3), are the central hotspots of the application. Each kernel launch applies a 1074-element stencil for each cell of the octree node's sub-grid. As we have 8³ = 512 cells per sub-grid, this results in 549,888 interactions per kernel launch. Depending on the interaction type, each of those interactions requires a different number of floating point operations to be executed.
For monopole-monopole interactions we execute 12 floating point operations per interaction, and for multipole-multipole/monopole interactions 455 floating point operations. More information about the kernels can be found in [45]; however, the number of floating point operations per monopole interaction differs slightly there, as the two monopole-X kernels were combined in that work. IMPROVING OCTO-TIGER USING HIGH-LEVEL ABSTRACTIONS Running an irregular, adaptive application like Octo-Tiger on a heterogeneous supercomputer like Piz Daint presents challenges: The pockets of parallelism contained in each octree node must be run efficiently on the GPU, despite the relatively small number of cells in each sub-grid. The GPU implementation should not degrade parallel efficiency through overheads such as work aggregation, CPU/GPU synchronization, or blocked CPU threads. Furthermore, we expect the implementation to behave as before, with the exception of faster GPU execution of tasks. In this section, we first present our implementation and integration of FMM GPU kernels into the task flow using HPX CUDA futures as a high-level abstraction. We then introduce the libfabric parcelport and show how this new communication layer improves scalability of Octo-Tiger by taking advantage of HPX's communication abstractions. Asynchronous Many Tasks with GPUs As our FMM implementation is stencil-based and uses a struct-of-arrays data structure, the FMM kernels introduced in Section 4.3 are very amenable to GPU execution. Each kernel executes a 1074-element stencil on the 512 cells of the 8×8×8 sub-grid of an octree node, calculating the gravitational interactions of each cell with its 1074 neighbors. We parallelize over the cells of the sub-grid, launching kernels with 8 blocks, each containing 64 CUDA threads which execute the whole stencil for each cell. The stencil-based computation of the interactions between two cells is done the same way as on the CPU. In fact, since we use Vc datatypes for vectorization on the CPU, we can simply instantiate the same function template (that computes the interaction between two cells) with scalar datatypes and call it within the GPU kernel. GPU-specific optimizations are done in a wrapper around this cell-to-cell method and the loop over the stencil elements. This wrapper includes the usual CUDA optimizations such as shared and constant memory usage. Thus far we have used standard CUDA to create rather normal kernels for the FMM implementation. However, these kernels alone suffer from two major issues: As it stands, the execution of a GPU kernel would block the CPU thread launching it; no other task would be scheduled or executed whilst it runs. As Octo-Tiger relies on having thousands of tasks available simultaneously for scalability, this presents a problem. The second issue is obvious when looking at the size of the workgroups and the number of blocks for each GPU kernel launch mentioned above. The GPU kernels do not expose enough parallelism to fully utilize a GPU such as the NVIDIA P100 using only small workgroups and 8 blocks per kernel. To solve these two issues, we provide an HPX abstraction for CUDA streams. For any CUDA stream event we create an HPX future that becomes ready once operations in the stream (up to the point of the event/future's creation) are finished. Internally, this is created using a CUDA callback function that sets the future ready [24].
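The construct just described can be approximated with plain CUDA and a promise. The sketch below uses std::promise/std::future as a stand-in for the HPX machinery, since the real integration readies an hpx::future that the HPX scheduler can attach continuations to without blocking a worker thread; cudaLaunchHostFunc (available since CUDA 10) is the real API call, everything else is illustrative.

```cpp
#include <cuda_runtime.h>
#include <future>
#include <memory>

// Enqueue a marker that fulfils a promise once all work previously enqueued
// on the stream (kernels, copies) has completed. Illustrative stand-in only:
// Octo-Tiger instead sets an hpx::future so the scheduler reacts to completion.
std::future<void> future_from_stream(cudaStream_t stream)
{
    auto promise = std::make_unique<std::promise<void>>();
    std::future<void> fut = promise->get_future();

    // The host function runs on a CUDA-internal thread once the stream reaches
    // this point; host functions must not call CUDA API functions themselves.
    cudaLaunchHostFunc(
        stream,
        [](void* p) {
            std::unique_ptr<std::promise<void>> owned(
                static_cast<std::promise<void>*>(p));
            owned->set_value();   // mark the GPU work in this stream as finished
        },
        promise.release());

    return fut;
}
```

A dependent CPU task can then be chained onto the returned future (with HPX, via future::then), so it starts as soon as all kernels previously enqueued in the stream have finished.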
This seemingly simple construct allows us to fully integrate CUDA kernels within the HPX runtime, as it provides a synchronization point for the CUDA stream that is compatible with the HPX scheduler. It yields multiple immediate advantages: • Seamless and automatic execution of kernels and overlapping of CPU/GPU tasks; • overlapping of computation and communication, as some HPX tasks are related to the communication with other compute nodes; and • CPU/GPU data synchronization: completed GPU kernels trigger the scheduler and signal that their buffers can be used or copied. Furthermore, the integration is mostly non-invasive since a CUDA kernel invocation now equates to a function call returning a future. The rest of the kernel implementation and the (asynchronous) buffer handling uses the normal CUDA API, thus the GPU kernels themselves can still be hand-optimized. Nonetheless, this integration alone does not solve the second issue: the kernels are too fine-grained to fully utilize the GPUs. Conventional approaches to solve this include work aggregation and execution models where CUDA kernels can call other kernels and coalesce execution. Unfortunately, work aggregation schemes, as described in [42], do not fit our task-based approach. Individual kernels should finish as soon as possible in order to trigger dependent tasks, such as communication with other nodes or the third FMM step; delays in launching these may lead to a degradation of parallel efficiency. Recursively calling other GPU kernels as in [59] poses a similar problem, as we would traverse the octree on the GPU, making communication calls more difficult. Furthermore, we would like to run code on the appropriate device: tree traversals on the CPU, and processing of the octree kernels on the GPU. Here, however, we can exploit the fact that the execution of GPU kernels is just another task to the HPX runtime system: we launch a multitude of different GPU kernels concurrently on different streams, with each CPU thread handling multiple CUDA streams and thus multiple GPU kernels at once. Normally, this would present problems for CPU/GPU synchronization, as GPU results are needed for other CPU tasks. But the continuation-passing style of program execution in HPX, chaining dependent tasks onto futures, makes this trivial. When a GPU kernel output (or data transfer) that has not yet finished is needed for a task, the runtime assigns different work to the CPU and schedules the dependent tasks when the GPU future becomes ready. When the number of concurrent GPU tasks running matches the total number of available CUDA streams (usually 128 per GPU), new kernels are instead executed as CPU tasks until a CUDA stream becomes empty again. In summary, the octree is traversed on the CPU, with tasks spawned asynchronously for kernels on the GPU or CPU, returning futures for each. Any tasks that require results from previous ones are attached as continuations to the futures. The CPU is continuously supplied with new work (including communication tasks) as futures complete. Since all CPU threads may participate in traversal and steal work from each other, we keep the GPU busy by virtue of the sheer number of concurrent GPU kernels submitted. Octo-Tiger is the first application to use HPX CUDA futures.
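The launch-or-fall-back decision on top of such a pool of streams can be sketched as follows. The stream_pool class and the kernel wrapper names are hypothetical stand-ins; cudaStreamCreate and cudaStreamQuery are the real CUDA calls, and in Octo-Tiger each HPX worker thread owns such a pool, with the GPU path additionally returning a future as described above.

```cpp
#include <cuda_runtime.h>
#include <cstddef>
#include <vector>

// Hypothetical per-worker-thread pool of CUDA streams (illustrative only).
class stream_pool {
    std::vector<cudaStream_t> streams_;
public:
    explicit stream_pool(std::size_t n) : streams_(n) {
        for (auto& s : streams_) cudaStreamCreate(&s);
    }
    // cudaStreamQuery returns cudaSuccess once all work previously enqueued
    // on the stream has completed, so this finds an idle stream if one exists.
    bool try_get_idle(cudaStream_t& out) {
        for (auto s : streams_)
            if (cudaStreamQuery(s) == cudaSuccess) { out = s; return true; }
        return false;
    }
};

// Stand-ins for the two execution paths of an FMM kernel.
void launch_multipole_kernel_gpu(cudaStream_t /*stream*/) { /* kernel<<<8, 64, 0, stream>>>(...) */ }
void run_multipole_kernel_cpu() { /* Vc-vectorized CPU path, occupies this core */ }

// Per octree node: prefer the GPU if a stream is idle, otherwise compute on this core.
void process_node(stream_pool& pool)
{
    cudaStream_t s;
    if (pool.try_get_idle(s))
        launch_multipole_kernel_gpu(s);   // asynchronous; the worker thread picks up the next task
    else
        run_multipole_kernel_cpu();       // CPU fallback: slower, and no GPU work is fed meanwhile
}
```

This is also the mechanism behind the sensitivity to the core count discussed in Sect. 6.1: once every stream is busy, worker threads fall back to the much slower CPU path and temporarily stop feeding the GPU.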
It is in fact an ideal fit for this kind of GPU integration: parallelization is possible only within individual timesteps of the application, and a production run simulation will require tens of thousands of them, making it essential to maximize parallel efficiency (as well as proper GPU usage), particularly as each timestep might run for only a fraction of a second on the whole machine. The fine-grained approach of GPU usage presented here fits these challenges perfectly. In Section 6 we show how this model performs. We run a real-world scenario for a few timesteps to show both that we achieve a significant fraction of GPU peak performance during the execution of the FMM, as well as scalability on the whole Piz Daint machine, each of the 5400 compute nodes using an NVIDIA P100 GPU. Thus, Octo-Tiger also serves as a proof of concept, showing that large, tree-based applications containing pockets of parallelism can efficiently run fine-grained parallel tasks on the GPU without compromising scalability with HPX. Active messages and libfabric parcelport The programming model of HPX does not rely on the user matching network sends and receives explicitly as one would do with MPI. Instead, active messages are used to transfer data and trigger a function on a remote node; we refer to the triggering of remote functions with bound arguments as actions and the messages containing the serialized data and remote function as parcels [7]. A halo exchange, for example, written using MPI involves a receive operation posted on one node and a matching send on another. With non-blocking MPI operations, the user may check for readiness of the received data at a convenient place in the code and then act appropriately. With blocking ones, the user must wait for the received data and can only continue once it arrives. With HPX, the same halo exchange may be accomplished by creating a future for some data on the receiving end, and having the sending end trigger an action that sets the future ready with the contents of the parcel data. Since futures in HPX are the basic synchronization primitive for work, the user may attach a continuation to the received data to start the next calculation that depends on it. The user does not therefore have to perform any test for readiness of the received data: when it arrives, the runtime will set the future and schedule whatever work depends upon it automatically. This combines the convenience of a blocking receive that triggers work with the flexibility of an asynchronous receive that allows the runtime to continue whilst waiting. The asynchronous send/receive abstraction in HPX has been extended with the concept of a channel that the receiving end may fetch futures from (for timesteps ahead if desired) and the sending end may push data into as it is generated. Channels are set up by the user similarly to MPI communicators; however, the handles to channels are managed by AGAS (Sect. 4.1). Even when a grid cell is migrated from one node to another during operation, the runtime manages the updated destination address transparently, allowing the user code to send data to the relocated grid with minimal disruption. These abstractions greatly simplify user-level code and allow performance improvements in the runtime to be propagated seamlessly to all places that use them. The default messaging layer in HPX is built on top of the asynchronous two-sided MPI API and uses Isend/Irecv within the parcel encoding and decoding steps of action transmission and execution.
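The receive side of the halo exchange described above boils down to fetching a future from a channel and attaching a continuation to it. The sketch below mimics that pattern with a tiny, locally defined single-shot channel built on std::promise; it deliberately does not use HPX's real distributed channel type (whose exact API varies between versions), and halo_t as well as the function names are illustrative only.

```cpp
#include <future>
#include <utility>
#include <vector>

using halo_t = std::vector<double>;   // illustrative stand-in for one halo buffer

// Minimal single-shot, in-process stand-in for a communication channel:
// the receiver obtains a future, the sender later pushes the data in.
// HPX's real channel is distributed, reusable across timesteps, and addressed via AGAS.
class halo_channel {
    std::promise<halo_t> slot_;
public:
    std::future<halo_t> get() { return slot_.get_future(); }
    void set(halo_t data)     { slot_.set_value(std::move(data)); }
};

// Next solver step that depends on the halo (stub).
void apply_halo(halo_t const&) { /* update boundary cells, launch the next kernels */ }

int main()
{
    halo_channel ch;

    // Receiver: request the halo for the next timestep; nothing blocks here.
    std::future<halo_t> incoming = ch.get();

    // Sender: in the real code a neighbouring octree node, possibly on another
    // locality, pushes the data as soon as it has been produced.
    ch.set(halo_t(512, 1.0));

    // Continuation: with HPX one writes incoming.then(...) so the runtime schedules
    // the dependent task; with std::future we simply wait for the value here.
    apply_halo(incoming.get());
}
```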
HPX is designed from the ground up to be multi-threaded and to avoid locking and waiting, instead suspending tasks and executing others as soon as any blocking activity takes place. Although MPI supports multi-threaded applications, it has its own internal progress/scheduling management and locking mechanisms that interfere with the smooth running of the HPX runtime. The scheduling in MPI is in turn built upon the network provider's asynchronous completion queue handling and multi-threaded support, which may also use OS-level locks that suspend threads (and thus impede HPX progress). The HPX parcel format is more complex than a simple MPI message, but the overheads of packing data can be kept to a minimum [7] by using remote memory access (RMA) for transfers. All user/packed data buffers larger than the eager message size threshold are encoded as pointers and exchanged between nodes using one-sided RMA put/get operations. Switching HPX to use the one-sided MPI RMA API is no solution, as this involves memory registration/pinning that is passed through to the provider-level API, causing additional (unwanted) synchronization between user code, MPI code, and the underlying network/fabric driver. Bypassing MPI and using the network API directly was seen as a way of decreasing latency, improving memory management, simplifying the parcelport code, and better integrating the multi-threaded runtime with the communications layer. Libfabric was chosen as it has an ideal API that is supported on many platforms, including Cray machines via the GNI provider [46]. The purely asynchronous API of libfabric blends seamlessly with the asynchronous internals of HPX. Any task scheduling thread may poll for completions in libfabric and set futures to received data without any intervening layer. A one-to-one mapping of completion events to ready futures is possible for some actions, and dependencies for those futures can be immediately scheduled for execution. We expose pinned memory buffers for RMA to libfabric via allocators in the HPX runtime, so that internal data copying between user buffers (halos for example) and the network is minimized. When dealing with GPUs capable of multi-TFLOP performance, even delays of the order of microseconds in receiving data and subsequent task launches translate to a significant loss of compute capability. Note that with the HPX API it is trivial to reserve cores for thread pools dedicated to background processing of the network, separate from normal task execution, to further improve performance, but this has not yet been attempted with the Octo-Tiger code. Our libfabric parcelport uses only a small subset of the libfabric API but delivers very high performance compared to MPI, as we demonstrate in Sect. 6. Similar gains could probably be made using the MPI RMA API, but this would require a much more complex implementation. It is an important contribution of this work that we have demonstrated that an application may benefit from significant performance improvements in the runtime without changing a single line of the application code. This has been achieved utilizing abstractions for task management, scheduling, distribution, and messaging. It is generally true of any library that improvements in performance will produce corresponding improvements in code using it.
But switching a large codebase to one-sided or asynchronous messaging is usually a major operation that involves redesigns of significant portions to handle synchronization between previously isolated (or sequential) sections. The unified futurized and asynchronous API of HPX provides a unique opportunity to take advantage of improvements at all levels of parallelism throughout a code, as all tasks are naturally overlapped. Network bandwidth and latency improvements not only reduce waiting for remote data; the improved scheduling of all messages (synchronization of remote tasks as well as direct data movement) also directly improves on-node scheduling and thus benefits all tasks. RESULTS The initial model of our V1309 simulation includes a 1.54 M⊙ primary and a 0.17 M⊙ secondary. Both have helium cores and solar-composition envelopes, and there is a common envelope surrounding both stars. The simulation domain is a cubic grid with edges 1.02 × 10³ R⊙ long. This is about 160 times larger than the initial orbital separation, providing space for any mass ejected from the system. The sub-grids are 8 × 8 × 8 grid cells. The centers of mass of the components are 6.37 R⊙ apart. The grid is rotating about the z-axis with a period of 1.42 days, corresponding to the initial period of the binary. For the level 14 run, both stars are refined down to 12 levels, with the core of the accretor and donor refined to 13 and 14 levels, respectively. The 15, 16, and 17 level runs are successively refined one more level in each refinement regime. At the finest level, each grid cell is 7.80 × 10⁻³ R⊙ in each dimension for level 14, down to 9.750 × 10⁻⁴ R⊙ for level 17. Although the available compute time allowed us to simulate only a few time-steps for this work, this is exactly the production scenario we aim for. For all obtained results, the software dependencies in Table 1 were used to build Octo-Tiger (d6ad085) on the various platforms. FMM Node-Level Performance In the following, we will take a closer look at the performance of the FMM kernels, discussed in Sect. 4.3 and 5.1, on both GPUs and different CPU platforms. We will first explain how we made measurements and then discuss the results. 6.1.1 Measuring the Node-Level Performance. Measuring the node-level results for the FMM solver alone presents several challenges. Instead of a few large kernels, we are executing millions of small FMM kernels overall. Additionally, one FMM kernel alone will never utilize the complete device. On the CPU, each FMM kernel is executed by just one core. We cannot assume that the other cores will always be busy executing an FMM kernel as well. On the GPU, one kernel will utilize only up to 8 Streaming Multiprocessors (SMs). The NVIDIA P100 GPU contains 56 of these SMs, each of which is analogous to a SIMD-enabled processor core. In order to see how well we utilize the given hardware with the FMM kernels, we focus not on the performance of a single kernel, but rather on the overall performance while computing the gravity during the GPU-accelerated FMM part of the code. In order to calculate both the GFLOP/s and the fraction of peak performance, we need to know the number of floating point operations executed while calculating the gravity, as well as the time required to do so. The first piece of information is easy to collect. Each FMM kernel always executes a constant number of floating point operations.
We count the number of kernel launches in each HPX thread and accumulate this number until the end of the simulation. We can further record whether a kernel was executed on the CPU or the GPU. Due to the interleaving of kernels and the general lack of synchronization points between the gravity solver and the fluid solver, the amount of runtime spent in the FMM solver is more difficult to obtain. To measure it, we run the simulation multiple times; first, on the CPU without any GPUs. We collect profiling data with perf to get an estimate of the fraction of the runtime spent within the FMM kernels and thus the gravity solver. With this information we calculate the fraction of the runtime spent outside the gravity solver. Afterwards we repeat the run (without perf) and multiply its total runtime by the previously obtained runtime fractions to get both the time spent in the gravity solver and the time spent in other methods. With this information, as well as the counters for the FMM kernel launches, we can now calculate the GFLOP/s achieved by the CPU when executing the FMM kernels. To get the same information for the GPUs, we include them in a third run of the same simulation. Using the GPUs, only the runtime of the gravity solver will improve, since the rest of the code does not benefit from them. Thus, by subtracting the runtime spent outside of the FMM kernels in the CPU-only run from the total runtime of the third run, we can estimate the overall runtime of the GPU-enabled FMM kernels and, with that, the GFLOP/s we achieve overall during their execution. For all results in this work, we employ the same V1309 scenario and double precision calculations. The level 14 octree discretization considered here will serve as the baseline for the scaling runs. Results. The results of our node-level runs can be found in Tab. 2. After switching to a stencil-based approach for the FMM instead of the old interaction lists, the fraction of time spent in the two main FMM kernels shrank considerably. On the Intel Xeon E5-2660 v3 with 20 cores, they now only make up 38% of the total runtime. On the Intel Xeon Phi 7210 this effect is even more pronounced, with the FMM making up only 20% of the total runtime. This is most likely due to the fact that the other, less optimized parts of Octo-Tiger make less use of the SIMD capabilities that the Xeon Phi offers and thus run a lot slower. This reduces the overall fraction of the FMM runtime compared to the rest of the code. Nevertheless, we achieve a significant fraction of peak performance on all devices. On the CPU side, the Xeon Phi 7210 achieves the most GFLOP/s within the FMM kernels. Since it lowers its frequency to 1.1 GHz during AVX-intensive calculations, the real achieved fraction of peak performance may be significantly higher than 17%. We have assumed the base (unthrottled) clock rate shown in the table for calculating the theoretical peak performance of the CPU devices. Other than running a specific Vc version that supports AVX512 on the Xeon Phi, we did not adapt the code. However, we attain a reasonable fraction of peak performance on this difficult hardware. On the AVX2 CPUs we reach about 30%. We tested GPU performance of the FMM kernels in multiple hardware configurations; we used either 10 or 20 cores in combination with either one or two V100 GPUs. Using two V100 GPUs, an insufficient number of cores affects performance. With 20 cores and two GPUs we achieve 37% of the combined V100 peak performance.
Reducing to 10 cores, the performance drops to 22% of peak. In this case the GPUs are starved of work, since the 10 cores are busy with tasks of their own and cannot launch enough kernels on the GPUs. In contrast, when utilizing one V100 GPU managed by 10 cores, we achieve 32% of peak performance on the GPU. Using one V100 with 20 CPU cores, however, the performance decreases to only 22% of peak: the number of threads used to fill the CUDA streams of the GPU directly affects the performance. This effect can be explained by the way we handle CUDA streams. Each CPU thread manages a certain number of CUDA streams. When launching a kernel, a thread first checks whether all of the CUDA streams it manages are busy. If not, the kernel is launched on the GPU using an idle stream. Otherwise, the kernel is executed on the CPU by the current CPU worker thread. Executing an FMM kernel on the CPU takes significantly longer than on the GPU, as one CPU kernel is executed on a single core. In a CPU-only setting, all cores work on FMM kernels of different octree nodes. With 20 cores and one V100, the CPU threads first fill all 128 streams with 128 kernel launches. When launching the next kernels, the GPU has not finished yet, and the CPU threads start to work on FMM kernels themselves. This leads to starvation of the GPU for a short period of time, as the CPU threads do not launch more work on the GPU in the meantime. Having two V100s mitigates the problem, as the cores are less likely to work on the FMM themselves: it is more likely that a free CUDA stream is available. To provide further data on this, we analyzed the number of kernels launched on the GPU. Using 20 cores and one V100 we launch 97.4995% of all multipole-multipole FMM kernels on the GPU. Using 10 cores and one V100 this number increases to 99.9997%. Considering that an FMM execution on one CPU core takes longer than on the GPU and that no further GPU kernels are launched by that thread in the meantime, even this small difference in percentage can cause a large performance impact. This is a current limitation of our implementation and will be addressed in the next version of Octo-Tiger: there is no reason not to launch multiple FMM kernels in one stream if no idle stream is available. This would allow 100% of the FMM kernels to be launched on the GPU, independent of the CPU hardware. Table 4: Number of tree nodes (sub-grids) per level of refinement (LoR) and the memory usage of the corresponding level. Since Piz Daint is our target system, we also evaluated performance on one of its nodes, using 128 CUDA streams. For comparison, 99.5207% of all multipole-multipole FMM kernels were launched on the GPU. We achieve about 21% of peak performance on the GPU. In summary, we were able to demonstrate that the uncommon approach of launching many small kernels is a valid way to utilize the GPU. Scaling results All of the presented distributed scaling results were obtained on Piz Daint at the Swiss National Supercomputing Centre. Table 3 lists the hardware configuration of Piz Daint. For the scalability analysis of Octo-Tiger, different levels of refinement of the V1309 scenario were run, as shown in Tab. 4. A level 13 restart file, which takes less than an hour to generate on an Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz, was used as the basis for all runs. For all levels, this restart file was read and refined to the desired resolution through conservative interpolation of the evolved variables.
The number of nodes was increased in powers of two (1, 2, 4, ...) up to 4096 nodes, with an additional maximum run on 5400 nodes, which corresponds to the full system of Piz Daint. All runs utilized 12 CPU cores on each node, i.e. up to 64,800 cores for the full-system run. The simulations started at level 14, the smallest level that still consists of an acceptable number of sub-grids to expose sufficient parallelism while fitting into the memory of a single Piz Daint node. The number of nodes was increased by powers of two until the scaling saturated due to too little work per node. Higher refinement levels were then run on the largest overlapping node counts to produce the graph shown in Fig. 2, where the speedup is calculated with respect to the number of processed sub-grids per second on one node at level 14. The graph therefore shows a combination of weak scaling as the level of refinement increases and strong scaling for each refinement level as the node count increases. Weak scaling is clearly very good, with close to optimal improvements with successive refinement levels. Strong scaling tails off as the number of sub-grids for each level becomes too small to generate sufficient work for all CPUs/GPUs. Network performance results. The difference in the number of sub-grids processed per second between the two parcelports increases with higher node counts and refinement levels, a clear sign that communication causes delays that prevent the processing cores from getting work done. Each increase in the refinement level can, due to AMR, increase the total number of grids by up to a factor of 8; see Tab. 4 for the actual values. This causes a near quadratic increase in the total number of halos exchanged. As the node count increases, the probability of a halo exchange increases linearly, and it is therefore no surprise that reduced communication latency leads to the large gains observed. The improvement in communication is due to all of the following changes: • Explicit use of RMA for the transfer of halo buffers. • Lower latency on send and receive of all parcels and execution of RMA transfers. • Direct control of all memory copies for send/receive buffers between the HPX runtime and the libfabric driver. • Reduced overhead between receipt of a transfer/message completion event and subsequent setting of a ready future. • Thread-safe lock-free interface between the HPX scheduling loop and the libfabric API with polling for network progress/completions integrated into the HPX task scheduling loop. It is important to note that the timing results shown are for the core calculation steps that exchange halos; the figures do not include regridding steps or I/O that also make heavy use of communication. Including them would further illustrate the effectiveness of the networking layer: start-up timings of the main solver at refinement levels 16 and 17 were in fact reduced by an order of magnitude using the libfabric parcelport, increasing the efficiency of refining the initial restart file of level 13 to the desired level of resolution. Note further that some data points at levels 16 and 17 for large runs are missing, as the start-up time consumed the limited node hours available for their execution. The communication speedups shown have not separately considered the effects of thread pools and the scheduling of network progress on the rates of injection or the handling of messages.
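The speedup plotted in Fig. 2 is simply the sub-grid processing rate normalized by the one-node, level-14 baseline. A small sketch of that normalization follows; all numbers are placeholders, not measured values.

```python
# Speedup as used for Fig. 2: sub-grids processed per second, normalized to
# the single-node run at refinement level 14. All values are placeholders.

baseline_subgrids_per_s = 120.0        # one node, level 14 (hypothetical)

runs = [
    # (nodes, level, sub-grids per second) -- illustrative numbers only
    (1,    14, 120.0),
    (64,   14, 6500.0),
    (256,  15, 26000.0),
    (4096, 17, 380000.0),
]

for nodes, level, rate in runs:
    speedup = rate / baseline_subgrids_per_s
    efficiency = speedup / nodes
    print(f"{nodes:5d} nodes, level {level}: speedup {speedup:9.1f}, "
          f"parallel efficiency {efficiency:.2f}")
```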
When running on Piz Daint with 12 worker threads executing tasks, any thread might need to send data across the network. In general, the injection of data into send queues does not cause problems unless many threads are attempting to do so concurrently and the send queues are full. The receipt of data, however, must be performed by polling of completion queues. This can only take place in between the execution of other tasks. Thus, if all cores are busy with work, no polling is done, and if no work is available, all cores compete for access to the network. The effects can be observed in Fig. 3, where the libfabric parcelport causes a slight reduction in performance for lower node counts. With GPUs doing most of the work, CPU cores can be reserved for network processing, and the job of polling can be restricted to a subset of cores that have no other (longer-running) tasks to execute. HPX supports partitioning of a compute node into separate thread pools with different responsibilities; the effects of this will be investigated further to see whether reducing contention between cores helps to restore the lost performance. CONCLUSIONS AND FUTURE WORK. As the core contributions of this paper, we have demonstrated node-level and distributed performance of Octo-Tiger, an astrophysics code simulating a stellar binary merger. We have shown excellent scaling up to the full system on Piz Daint and improved network performance based on the libfabric library. The high-level abstractions we employ, in particular HPX and Vc, demonstrate how portability in heterogeneous HPC systems is possible. This is the first time an HPX application has been run on the full system of a GPU-accelerated supercomputer. This work also has several implications for parallel programming on future architectures. Asynchronous many-task runtime systems like HPX are a powerful, viable, and promising addition to the current landscape of parallel programming models. We show not only that it is possible to utilize these emerging tools to perform at the largest scales, but also that it might even be desirable to leverage the latency hiding, finer-grained parallelism, and natural support for heterogeneity that the asynchronous many-task model exposes. In particular, we have significantly increased the node-level performance of the originally most compute-hungry part of Octo-Tiger, the gravitational solver. Our optimizations have demonstrated excellent node-level performance on different HPC compute nodes with heterogeneous hardware, including multi-GPU systems and KNL. We have achieved up to 37% of peak performance on two NVIDIA V100 GPUs, and 17% of peak on a KNL system. To achieve high node-level performance for the full simulation, we will also port the remaining part, the hydrodynamics solver, to GPUs. The distributed scaling results have been obtained within a development project on Piz Daint and thus with severely limited compute time. The excellent results presented in this paper have already laid the foundation for a production proposal that will enable us to target full-resolution simulations with impact on physics. Despite the significant performance improvement from replacing MPI with libfabric, there are further networking improvements under development that have not yet been incorporated into Octo-Tiger. This includes the use of user-controlled RMA buffers that allow the user to instruct the runtime that certain memory regions will be used repeatedly for communication (and thus amortize memory pinning/registration costs).
Integration of such features into the channel abstraction may prove to reduce latencies further and is an area we will explore. With respect to the astrophysical application, we have already developed a radiation transport module for Octo-Tiger based on the two-moment approach adapted by [48]. This will be required to simulate the V1309 merger with high accuracy. What remains is to fully debug and verify this module and to port the implementation to GPUs. Finally, our full-scale simulations will be able to predict the outcome of mergers that have not yet happened: these simulations will be useful for comparison with future "red nova" contact-binary merger events. Two contact-binary systems have been suggested as future mergers, KIC 9832227 [40,49] and TY Pup [47]. Other candidate systems will be discovered with the new all-sky surveys such as the Zwicky Transient Facility (ZTF) and the Large Synoptic Survey Telescope (LSST).
8,475
1908.02711
2965409261
Adversarial training has been recently employed for realizing structured semantic segmentation, in which the aim is to preserve higher-level scene structural consistencies in dense predictions. However, as we show, value-based discrimination between the predictions from the segmentation network and ground-truth annotations can hinder the training process from learning to improve structural qualities, as well as prevent the network from properly expressing uncertainties. In this paper, we rethink adversarial training for semantic segmentation and propose to reformulate the fake/real discrimination framework with a correct/incorrect training objective. More specifically, we replace the discriminator with a "gambler" network that learns to spot and distribute its budget in areas where the predictions are clearly wrong, while the segmenter network tries to leave no clear clues for the gambler where to bet. Empirical evaluation on two road-scene semantic segmentation tasks shows that not only does the proposed method re-enable expressing uncertainties, it also improves pixel-wise and structure-based metrics.
Adversarial training schemes have been extensively employed in the literature to impose structural consistencies for semantic segmentation @cite_30 @cite_14 @cite_33 @cite_2 @cite_23 @cite_1 @cite_8 @cite_48 @cite_24 @cite_20 @cite_5 . @cite_14 incorporate a discriminator network trained to distinguish the real labels and network-produced predictions. Involving the segmenter in a minimax game with the discriminator motivates the network to bridge the gap between the two distributions and consequently to achieve higher-level consistencies in predicted labels.
{ "abstract": [ "In this paper, we propose perceptual adversarial networks (PANs) for image-to-image transformations. Different from existing application driven algorithms, PAN provides a generic framework of learning to map from input images to desired images (Fig. 1), such as a rainy image to its de-rained counterpart, object edges to photos, and semantic labels to a scenes image. The proposed PAN consists of two feed-forward convolutional neural networks: the image transformation network T and the discriminative network D. Besides the generative adversarial loss widely used in GANs, we propose the perceptual adversarial loss, which undergoes an adversarial training process between the image transformation network T and the hidden layers of the discriminative network D. The hidden layers and the output of the discriminative network D are upgraded to constantly and automatically discover the discrepancy between the transformed image and the corresponding ground truth, while the image transformation network T is trained to minimize the discrepancy explored by the discriminative network D. Through integrating the generative adversarial loss and the perceptual adversarial loss, D and T can be trained alternately to solve image-to-image transformation tasks. Experiments evaluated on several image-to-image transformation tasks (e.g., image deraining and image inpainting) demonstrate the effectiveness of the proposed PAN and its advantages over many existing works.", "Adversarial training has been shown to produce state of the art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network along with an adversarial network that discriminates segmentation maps coming either from the ground truth or from the segmentation network. The motivation for our approach is that it can detect and correct higher-order inconsistencies between ground truth segmentation maps and the ones produced by the segmentation net. Our experiments show that our adversarial training approach leads to improved accuracy on the Stanford Background and PASCAL VOC 2012 datasets.", "", "Convolutional neural networks (CNNs) have been applied to various automatic image segmentation tasks in medical image analysis, including brain MRI segmentation. Generative adversarial networks have recently gained popularity because of their power in generating images that are difficult to distinguish from real images.", "Automatic liver segmentation in 3D medical images is essential in many clinical applications, such as pathological diagnosis of hepatic diseases, surgical planning, and postoperative assessment. However, it is still a very challenging task due to the complex background, fuzzy boundary, and various appearance of liver. In this paper, we propose an automatic and efficient algorithm to segment liver from 3D CT volumes. A deep image-to-image network (DI2IN) is first deployed to generate the liver segmentation, employing a convolutional encoder-decoder architecture combined with multi-level feature concatenation and deep supervision. Then an adversarial network is utilized during training process to discriminate the output of DI2IN from ground truth, which further boosts the performance of DI2IN. 
The proposed method is trained on an annotated dataset of 1000 CT volumes with various different scanning protocols (e.g., contrast and non-contrast, various resolution and position) and large variations in populations (e.g., ages and pathology). Our approach outperforms the state-of-the-art solutions in terms of segmentation accuracy and computing efficiency.", "Semantic segmentation constitutes an integral part of medical image analyses for which breakthroughs in the field of deep learning were of high relevance. The large number of trainable parameters of deep neural networks however renders them inherently data hungry, a characteristic that heavily challenges the medical imaging community. Though interestingly, with the de facto standard training of fully convolutional networks (FCNs) for semantic segmentation being agnostic towards the structure' of the predicted label maps, valuable complementary information about the global quality of the segmentation lies idle. In order to tap into this potential, we propose utilizing an adversarial network which discriminates between expert and generated annotations in order to train FCNs for semantic segmentation. Because the adversary constitutes a learned parametrization of what makes a good segmentation at a global level, we hypothesize that the method holds particular advantages for segmentation tasks on complex structured, small datasets. This holds true in our experiments: We learn to segment aggressive prostate cancer utilizing MRI images of 152 patients and show that the proposed scheme is superior over the de facto standard in terms of the detection sensitivity and the dice-score for aggressive prostate cancer. The achieved relative gains are shown to be particularly pronounced in the small dataset limit.", "Recently, the convolutional neural network (CNN) has been successfully applied to the task of brain tumor segmentation. However, the effectiveness of a CNN-based method is limited by the small receptive field, and the segmentation results don’t perform well in the spatial contiguity. Therefore, many attempts have been made to strengthen the spatial contiguity of the network output. In this paper, we proposed an adversarial training approach to train the CNN network. A discriminator network is trained along with a generator network which produces the synthetic segmentation results. The discriminator network is encouraged to discriminate the synthetic labels from the ground truth labels. Adversarial adjustments provided by the discriminator network are fed back to the generator network to help reduce the differences between the synthetic labels and the ground truth labels and reinforce the spatial contiguity with high-order loss terms. The presented method is evaluated on the Brats2017 training dataset. The experiment results demonstrate that the presented method could enhance the spatial contiguity of the segmentation results and improve the segmentation accuracy.", "Spleen volume estimation using automated image segmentation technique may be used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Networks (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN based methods. 
In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly were used to train and test the networks. The experimental results demonstrated that a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.", "Organ segmentation in chest X-rays using convolutional neural networks is disclosed. One embodiment provides a method to train a convolutional segmentation network with chest X-ray images to generate pixel-level predictions of target classes. Another embodiment will also train a critic network with an input mask, wherein the input mask is one of a segmentation network mask and a ground truth annotation, and outputting a probability that the input mask is the ground truth annotation instead of the prediction by the segmentation network, and to provide the probability output by the critic network to the segmentation network to guide the segmentation network to generate masks more consistent with learned higher-order structures.", "We introduce scGAN, a novel extension of conditional Generative Adversarial Networks (GAN) tailored for the challenging problem of shadow detection in images. Previous methods for shadow detection focus on learning the local appearance of shadow regions, while using limited local context reasoning in the form of pairwise potentials in a Conditional Random Field. In contrast, the proposed adversarial approach is able to model higher level relationships and global scene characteristics. We train a shadow detector that corresponds to the generator of a conditional GAN, and augment its shadow accuracy by combining the typical GAN loss with a data loss term. Due to the unbalanced distribution of the shadow labels, we use weighted cross entropy. With the standard GAN architecture, properly setting the weight for the cross entropy would require training multiple GANs, a computationally expensive grid procedure. In scGAN, we introduce an additional sensitivity parameter w to the generator. The proposed approach effectively parameterizes the loss of the trained detector. The resulting shadow detector is a single network that can generate shadow maps corresponding to different sensitivity levels, obviating the need for multiple models and a costly training procedure. We evaluate our method on the large-scale SBU and UCF shadow datasets, and observe up to 17 error reduction with respect to the previous state-of-the-art method.", "In this work, we segment spheroids with different sizes, shapes, and illumination conditions from bright-field microscopy images. To segment the spheroids we create a novel multiscale deep adversarial network with different deep feature extraction layers at different scales. We show that linearly increasing the adversarial loss contribution results in a stable segmentation algorithm for our dataset. We qualitatively and quantitatively compare the performance of our deep adversarial network with two other networks without adversarial losses. 
We show that our deep adversarial network performs better than the other two networks at segmenting the spheroids from our 2D bright-field microscopy images." ], "cite_N": [ "@cite_30", "@cite_14", "@cite_33", "@cite_8", "@cite_48", "@cite_1", "@cite_24", "@cite_23", "@cite_2", "@cite_5", "@cite_20" ], "mid": [ "2731516742", "2950040358", "", "2735429996", "2964137552", "2593853042", "2786129249", "2962722483", "2603901205", "2777654136", "2766611250" ] }
I Bet You Are Wrong: Gambling Adversarial Networks for Structured Semantic Segmentation
In the past years, deep neural networks have obtained substantial success in various visual recognition tasks including semantic segmentation [12,15]. Despite the success of the frequently used (fully) convolutional neural networks [31] on semantic segmentation, they lack a built-in mechanism to enforce global structural qualities. For instance, if the task is to detect a single longest line among several linear structures in the image, then a CNN is probably not able to properly handle such global consistency and will likely give responses on other candidate structures. This stems from the fact that even though close-by pixels share a fair amount of receptive field, there is no designated mechanism to explicitly condition the prediction at a specific location on the predictions made at other related (close-by or far) locations, when training with a pixel-level loss. To better preserve structural quality in semantic segmentation, several methods incorporate graphical models such as conditional random fields (CRF) [26,49,42], or use specific topology-targeted engineered loss terms [1,38]. More recently, adversarial training [16] schemes are being explored [32,21,13], where a discriminator network learns to distinguish the distributions of the network-provided dense predictions (fake) and ground-truth labels (real), which directly encourages better inter-pixel consistencies in a learnable fashion. However, as we will show, the visual clues that the discriminator uses to distinguish the fake and real distributions are not always high-level geometrical properties. For instance, a discriminator might be able to leverage the prediction values to contrast the fuzzy fake predictions with the crisp zero/one real prediction values to achieve an almost perfect discrimination accuracy. Such value-based discrimination results in two undesirable consequences: 1) The segmentation network ("segmenter") is forced to push its predictions toward zeros and ones and pretend to be confident to mimic such a low-level property of real annotations. This prevents the network from expressing uncertainties. 2) In practice, the softmax probability vectors cannot reach exact zeros/ones, which would require infinitely large logits. This leaves a permanent possibility for the discriminator to scrutinize the small, but still remaining, value gap between the two distributions, making it needless to learn the more complicated geometrical discrepancies. This hinders such adversarial training procedures from reaching their full potential in learning the scene structure. Figure 1: From left to right: sample image from Cityscapes [6], corresponding ground-truth image, predictions from U-Net trained with cross-entropy loss, betting map from the gambler network, predictions from the gambling adversarial nets. Notice e.g. the spotted and resolved artifact in the predictions from the cross-entropy trained U-Net in the bottom right and on the right side of the road. Best visible zoomed-in on a screen. The value-based discrimination inherently stems from the fake/real discrimination scheme employed in adversarial structured semantic segmentation. Therefore, we aim to study a surrogate adversarial training scheme that still models the higher-level prediction consistencies, but is not trained to directly contrast the real and fake distributions.
In particular, we replace the discriminator with a "gambler" network that, given the predictions of the segmenter and a limited budget, learns to spot and invest in areas where the predictions of the network are likely wrong. Put another way, we reformulate the fake/real discrimination problem into a correct/incorrect distinction task. This prevents the segmenter network from faking certainty, since a wrong but confident prediction caught by the gambler heavily penalizes the segmenter. See Figures 1 and 2 for an overview. The main contributions of the paper are as follows: • We propose gambling adversarial networks as a novel adversarial training scheme for structured semantic segmentation. • We show that the proposed method resolves the usual adversarial semantic segmentation training issue with faking confidence. • We demonstrate that this reformulation of the adversarial training improves the semantic segmentation quality over the baselines, both in pixel-wise and structural metrics, on two semantic segmentation datasets, namely the Cityscapes [6] and Camvid [2] datasets. Method. In this section, the proposed method, gambling adversarial networks, is described. First, we present the usual adversarial training formulation for structured semantic segmentation and discuss the potential issues with it. Thereafter, we describe gambling adversarial networks and their reformulation of the former. Conventional adversarial training. In the usual adversarial training formulation, the discriminator learns to discriminate the ground-truth (real) from the predictions provided by the network (fake). By involving the segmenter in a minimax game, it is challenged to improve its predictions to provide realistic-looking predictions that fool the discriminator [32]. In semantic segmentation, such an adversarial training framework is often employed with the aim of improving higher-level structural qualities, such as connectivity, inter-pixel (local and nonlocal) consistencies and smoothness. The minimax game is set up by forming the following loss terms for the discriminator and segmenter: $L_d(x, y; \theta_s, \theta_d) = L_{bce}(d(x, s(x; \theta_s); \theta_d), 0) + L_{bce}(d(x, y; \theta_d), 1)$ (1), where x and y are the input image and the corresponding label-map, $s(x; \theta_s)$ is the segmenter's mapping of the input image x to a dense segmentation map parameterized by $\theta_s$, $d(x, y; \theta_d)$ represents the discriminator operating on segmentations y, conditioned on the input image x, and the binary cross-entropy is defined as $L_{bce}(\hat{y}, y) = -(y \log \hat{y} + (1 - y)\log(1 - \hat{y}))$, where $\hat{y}$ and y are the prediction and label, respectively. Typically, the loss function for the segmenter is a combination of low-level (pixel-wise) and high-level (adversarial) loss terms [21,32]: $L_s(x, y; \theta_s, \theta_d) = L_{ce}(s(x; \theta_s), y) + \lambda L_{bce}(d(x, s(x; \theta_s); \theta_d), 1)$ (2), where λ is the importance weighting of the adversarial loss, which uses the recommended non-saturating reformulation of the original minimax loss term to prevent vanishing gradients [16,11]. The pixel-level cross-entropy loss $L_{ce}$ optimizes all the pixels independently of each other by minimizing $L_{ce}(\hat{y}, y) = -\frac{1}{wh}\sum_{i,j}^{w,h}\sum_{k}^{c} y_{i,j,k} \log \hat{y}_{i,j,k}$. Recently, it has been suggested to modify the usual adversarial training for structured semantic segmentation [13,45,43] by replacing the binary cross-entropy adversarial loss term for the segmenter with a fake/real paired embedding difference loss, where the embeddings are extracted from the adversarially trained discriminator.
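Before turning to that modification, here is a small NumPy sketch of the baseline loss terms in Equations (1) and (2). The discriminator is a stand-in callable and all shapes and values are illustrative assumptions, so this shows only the loss arithmetic, not the authors' implementation.

```python
import numpy as np

EPS = 1e-7

def bce(pred, target):
    """Binary cross-entropy L_bce(y_hat, y) for scalar or array inputs."""
    pred = np.clip(pred, EPS, 1.0 - EPS)
    return -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def pixel_ce(probs, onehot):
    """Mean pixel-wise cross-entropy L_ce over an (H, W, C) prediction map."""
    return -np.mean(np.sum(onehot * np.log(np.clip(probs, EPS, 1.0)), axis=-1))

def discriminator_loss(d, x, seg_probs, onehot):
    """Eq. (1): real labels should score 1, segmenter output should score 0."""
    return bce(d(x, seg_probs), 0.0) + bce(d(x, onehot), 1.0)

def segmenter_loss(d, x, seg_probs, onehot, lam=1.0):
    """Eq. (2): pixel-wise CE plus the non-saturating adversarial term."""
    return pixel_ce(seg_probs, onehot) + lam * bce(d(x, seg_probs), 1.0)

# Toy usage with random maps and a dummy discriminator (illustrative only).
rng = np.random.default_rng(0)
H, W, C = 8, 8, 3
x = rng.random((H, W, 3))
logits = rng.normal(size=(H, W, C))
seg_probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
onehot = np.eye(C)[rng.integers(0, C, size=(H, W))]
dummy_d = lambda img, seg: 1.0 / (1.0 + np.exp(-seg.mean()))  # stand-in discriminator

print("discriminator loss:", discriminator_loss(dummy_d, x, seg_probs, onehot))
print("segmenter loss    :", segmenter_loss(dummy_d, x, seg_probs, onehot))
```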
To be more specific, the adversarial loss term in Equation (2) is replaced by the following embedding loss: $L_{emb}(x, \hat{y}, y; \theta_d) = \lVert d_e(x, \hat{y}; \theta_d) - d_e(x, y; \theta_d) \rVert_2$ (3), where the function $d_e(x, y; \theta_d)$ represents the extracted features from a particular layer in the discriminator. As shown in the EL-GAN method, this could significantly stabilize training [13]. Ideally, the discriminator's decisions are purely based on the structural differences between the real and the fake predictions. However, in semantic segmentation, it is often possible for the discriminator to perfectly distinguish the labels from the predictions based on the values. The output of the segmenter is a softmax vector per pixel, which assigns a probability to every class that ranges between zero and one. In contrast, the values in the ground-truth are either zeros or ones due to the one-hot encoding. Such a value-based discrepancy can yield unsatisfactory gradient feedback, since the segmenter might be forced to mimic the one-hot encoding of the ground-truth instead of the global structures. Additionally, the value-based discrimination is a never-ending problem, since realizing exact ones and zeros requires infinitely large logits; in practice, however, the segmenter always leaves a small value-based gap that can be exploited by the discriminator. Another undesired outcome is the loss of ability to express uncertainties, since all the predictions will converge towards a one-hot representation to bridge the value-based gap between the one-hot labels and the probabilistic predictions. Gambling Adversarial Networks. To prevent the adversarial network from utilizing the value-based discrepancy, we propose gambling adversarial networks, which focus solely on the structural inconsistencies. Instead of the usual real/fake adversarial training task, we propose to modify the task to learning to distinguish incorrect predictions given the whole prediction map. Different from a discriminator, the critic network (gambler) does not observe the ground-truth labels, but solely the RGB-image in combination with the prediction of the segmentation network (segmenter). Given a limited investment budget, the gambler predicts an image-sized betting map, where high bets indicate pixels that are likely incorrectly classified, given the contextual prediction clues around them. Since the gambler receives the entire prediction, structurally ill-formed predictions, such as non-smoothness, disconnectivities and shape anomalies, are clear visual clues for profitable investments for the gambler. An overview of gambling adversarial networks is provided in Figure 2. Similar to conventional adversarial training, the gambler and segmenter play a minimax game: the gambler maximizes the expected weighted pixel-wise cross-entropy, where the weights are determined by its betting map, while the segmenter attempts to improve its predictions such that the gambler does not have clues where to bet: $L_g(x, y; \theta_s, \theta_g) = -\frac{1}{wh}\sum_{i,j}^{w,h} g(x, s(x; \theta_s); \theta_g)_{i,j}\, L_{ce}(s(x; \theta_s)_{i,j}, y_{i,j})$ (4), where $g(x, s(x; \theta_s); \theta_g)_{i,j}$ is the amount of budget the gambler invests on position (i, j). The segmenter network minimizes the opposite: $L_s(x, y; \theta_s, \theta_g) = L_{ce}(s(x; \theta_s), y) - L_g(x, y; \theta_s, \theta_g)$ (5). Similar to conventional adversarial training, the segmentation network optimizes a combination of loss terms: a per-pixel cross-entropy loss and an inter-pixel adversarial loss.
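A NumPy sketch of the gambler's objective in Equation (4), the segmenter loss in Equation (5), and the betting-map normalization that is introduced below in Equation (6) follows. The toy inputs, shapes, and helper names are illustrative assumptions, not the authors' code.

```python
import numpy as np

EPS = 1e-7

def per_pixel_ce(probs, onehot):
    """Pixel-wise cross-entropy map of shape (H, W)."""
    return -np.sum(onehot * np.log(np.clip(probs, EPS, 1.0)), axis=-1)

def normalize_bets(sigmoid_bets, beta=0.02):
    """Eq. (6): smoothed betting distribution summing to one (limited budget)."""
    smoothed = sigmoid_bets + beta
    return smoothed / smoothed.sum()

def gambler_loss(bets, seg_probs, onehot):
    """Eq. (4): negative bet-weighted cross-entropy; minimizing this means the
    gambler maximizes the weighted CE by betting on likely-wrong pixels."""
    h, w = bets.shape
    return -np.sum(bets * per_pixel_ce(seg_probs, onehot)) / (w * h)

def segmenter_loss(bets, seg_probs, onehot):
    """Eq. (5): per-pixel cross-entropy minus the gambler's objective."""
    return np.mean(per_pixel_ce(seg_probs, onehot)) - gambler_loss(bets, seg_probs, onehot)

# Toy usage on an 8x8 prediction with 3 classes (illustrative shapes only).
rng = np.random.default_rng(0)
H, W, C = 8, 8, 3
logits = rng.normal(size=(H, W, C))
seg_probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
onehot = np.eye(C)[rng.integers(0, C, size=(H, W))]
bets = normalize_bets(rng.random((H, W)))     # stand-in for the gambler's output

print("gambler loss  :", gambler_loss(bets, seg_probs, onehot))
print("segmenter loss:", segmenter_loss(bets, seg_probs, onehot))
```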
It should be noted that the gambler can easily maximize this loss by betting infinite amounts on all the pixels. Therefore, it is necessary to limit the budget the gambler can spend. We accommodate this by turning the betting map into a smoothed probability distribution: $g(x, \hat{y}; \theta_g)_{i,j} = \frac{g_\sigma(x, \hat{y}; \theta_g)_{i,j} + \beta}{\sum_{k,l}^{w,h} g_\sigma(x, \hat{y}; \theta_g)_{k,l} + \beta}$ (6), where β is a smoothing factor and $g_\sigma(x, \hat{y}; \theta_g)_{i,j}$ represents the sigmoid output of the gambler network for the pixel with indices (i, j). Smoothing the betting map regularizes the model to spread its bets over multiple pixels instead of focusing on a single location. The adversarial loss causes two different gradient streams for the segmentation network, as shown in Figure 2, where the solid black and dashed red arrows indicate the forward pass and the backward gradient flows, respectively. In the backward pass, the gradient flow A pointing directly towards the prediction provides pixel-wise feedback independent of the other pixel predictions. Meanwhile, the gradient flow B, going through the gambler network, provides feedback reflecting inter-pixel and structural consistencies. Experimental results. In this section, we discuss the datasets and metrics for the evaluation of gambling adversarial networks. Thereafter, we describe the different network architectures for the segmenter and gambler networks and provide details for training. Finally, we report the results of our experiments. Experimental setup. Datasets. We conduct experiments on two different urban road-scene semantic segmentation datasets, but hypothesize that the method is generic and can be applied to any segmentation dataset. Cityscapes. The Cityscapes [6] dataset contains 2975 training images, 500 validation images and 1525 test images with a resolution of 2048 × 1024, consisting of 19 different classes, such as cars, persons and road signs. For preprocessing of the data, we down-scale the images to 1024 × 512, perform random flipping and take random crops of 512 × 512 for training. Furthermore, we perform intensity jittering on the RGB-images. Camvid. The urban scene Camvid [2] dataset consists of 429 training images, 101 validation images and 171 test images with a resolution of 960 × 720. We apply the same data augmentations as described above, except that we do not perform any down-scaling. Metrics. In addition to the mean intersection over union (IoU), we also quantify the structural consistency of the segmentation maps. Firstly, we compute the BF-score [7], which measures whether the contours of objects in the predictions match with the contours of the ground-truth. A point on the contour line is a match if the distance between the ground-truth and prediction lies within a tolerance distance τ, which we set to 0.75% of the image diagonal as suggested in [7]. Furthermore, we utilize a modified Hausdorff distance to quantitatively measure the structural correctness [9]. We slightly modify the original Hausdorff distance to prevent it from being overwhelmed by outliers: $d_H(X, Y) = \frac{1}{2}\left(\frac{1}{|X|}\sum_{x \in X}\inf_{y \in Y} d(x, y) + \frac{1}{|Y|}\sum_{y \in Y}\inf_{x \in X} d(x, y)\right)$ (7), where X and Y are the contours of the predictions and labels from a particular class and d(x, y) is the Euclidean distance. We average the score over all the classes that are present in the prediction and the ground-truth. Network architectures. For comparison, we experiment with two well-known baseline segmentation network architectures.
Firstly, a U-Net [39] based architecture as implemented in Pix2Pix [21], which is an encoder-decoder structure with skip connections. The encoder consists of nine down-sampling blocks containing a convolutional layer with batch normalization and ReLU. The decoder blocks are the same, except that the convolutions are replaced by transposed convolutions. Furthermore, we conduct experiments with PSPNet [48], which utilizes a pyramid pooling module to capture more contextual information. Similar to [48], we utilize an ImageNet pre-trained ResNet-101 [17] as backbone. For the gambler network, we utilize the same networks as the segmentation network. When training with the U-Net based architecture, the gambler network is identical except that it contains only six down-sampling blocks. For the PSPNet, the architectures of the gambler and segmenter are identical. For the baseline adversarial methods, we utilize the PatchGAN discriminator from Pix2Pix [21]. Training. For training the models, we utilize the Adam optimizer [24] with a linearly decaying learning rate over time. Similar to conventional adversarial training, the gambler and segmenter are trained in an alternating fashion, where the gambler is frozen when updating the segmenter and vice versa. Furthermore, we learned that, as opposed to conventional adversarial training, our network does not require separate pre-training, and in general we observe that the training is less sensitive to hyperparameters. Details of the hyperparameters can be found in the supplementary material. Results. Confidence expression. As discussed before, value-based discrimination encourages the segmentation network to mimic the one-hot vectors of the ground-truth, resulting in a loss of the ability to express uncertainty. We hypothesize that reformulating the fake/real discrimination in the adversarial training to a correct/incorrect distinction scheme will mitigate the issue. To verify this, the mean and standard deviation of the maximum class-likelihood value in every softmax vector for each pixel are tracked on the validation set over different training epochs, and the results are depicted in Figure 3. We conducted this experiment with the U-Net based architecture on Cityscapes, but we observed the same phenomena with the other segmentation network and on the other dataset. One can observe that for both the standard adversarial training and EL-GAN, which discriminate the real from the fake predictions, the predictions are converging towards one, with barely any standard deviation. For the gambling adversarial networks, the uncertainty of the predictions is well-preserved. In Table 1, the average mean maximum over the last 10 epochs is shown, which confirms that the gambling adversarial networks maintain the ability to express uncertainty similar to the cross-entropy model, while the existing adversarial methods attempt to converge to a one-hot vector. U-Net based segmenter. First, we compare the baselines with the gambling adversarial networks on the Cityscapes [6] validation set with the U-Net based architecture. The results in Table 2 show that the gambling adversarial networks perform better not only on the pixel-wise metric (IoU), but also on the structural metrics. In Table 3, the per-class results are reported; the gambling adversarial networks improve on almost all classes. Moreover, similar to the IoU, we observe the most significant improvements on the more fine-grained classes, such as rider and pole. In Figure 4, one qualitative sample is depicted; more samples are provided in the supplementary material.
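Returning briefly to the confidence-expression experiment: the statistic behind Figure 3 and Table 1 is simply the mean and standard deviation of the per-pixel maximum softmax probability over a validation set. A small sketch with illustrative shapes and values follows.

```python
import numpy as np

def confidence_stats(prob_maps):
    """Mean and std of the per-pixel maximum class probability.

    prob_maps: array of shape (N, H, W, C) holding softmax outputs for a
    validation set (shapes here are illustrative).
    """
    max_prob = prob_maps.max(axis=-1)          # (N, H, W)
    return max_prob.mean(), max_prob.std()

# Toy usage: near-one-hot predictions report a high mean and a low std, which
# is the collapse observed for the value-discriminating adversarial baselines.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 8, 8, 19))
soft = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
sharp = np.exp(10 * logits) / np.exp(10 * logits).sum(axis=-1, keepdims=True)
print(confidence_stats(soft))    # moderate confidence, noticeable std
print(confidence_stats(sharp))   # close to (1.0, ~0.0)
```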
The adversarial methods resolve some of the artifacts, such as the odd pattern in the car on the left. Moreover, the boundaries of the pedestrians on the sidewalk become more precise. We also provide an example betting map predicted by the gambler, given the predictions from the baseline model trained with cross-entropy in combination with the RGB-image. Note that the gambler bets on the badly shaped building in the prediction and responds to the artifacts in the car. PSPNet segmenter. We conduct experiments with the PSPNet [48] segmenter on the Camvid [2] and Cityscapes [6] datasets. In Table 5, the results are shown on the Cityscapes validation set. Again, the gambling adversarial networks perform better than the existing methods, on both of the structure-based metrics as well as the mean IoU. In Figure 5, a qualitative sample is shown; more can be found in the supplementary material. The gambling adversarial networks provide more detail on the traffic lights. Also, the structure of the sidewalk shows significant improvements over the predictions from the model trained with the standard segmentation loss. The quantitative results on the Camvid [2] test set are shown in Table 6. The gambling adversarial networks achieve the highest score on the structure-based metrics, but the standard adversarial training [32] performs best on the IoU. In the supplementary material, we provide qualitative results for the Camvid [2] test set and extra images for the aforementioned experiments. Table 4: BF-score [7] per class on the validation set of Cityscapes [6] with the U-Net based architecture [21] as segmentation network. Figure 5: Qualitative results on Cityscapes [6] with PSPNet [48] as segmentation network. Discussion. Correct/incorrect versus real/fake discrimination. We reformulated the adversarial real/fake discrimination task into training a critic network to learn to spot the likely incorrect predictions. As shown in Figure 3, the discrimination of real and fake causes undesired gradient feedback, since all the softmax vectors converge to a one-hot vector. We empirically showed that this behavior is caused by a value-based discrimination of the adversarial network. Moreover, modifying the adversarial task to correct/incorrect discrimination solves several problems. First of all, the reason to apply adversarial training to semantic segmentation is to improve on the high-level structures. However, the value-based discriminator is not only providing feedback based on the visual difference between the predictions and the labels, but also undesirable value-based feedback. Moreover, updating the weights in a network with the constraint that the output must be a one-hot vector complicates training unnecessarily. Finally, the value-based discriminator hinders the network from properly disclosing uncertainty. Both structured prediction and the ability to express uncertainty can be of great value for semantic segmentation, e.g. in autonomous driving and medical imaging applications. However, changing the adversarial task to discriminating the correct from the incorrect predictions resolves the aforementioned issues. The segmentation network is not forced to imitate the one-hot vectors, which preserves the uncertainty in the predictions and simplifies the training. Although we still notice that the gambler sometimes utilizes the prediction values by betting on pixels where the segmenter is uncertain, we also obtain improvements on the structure-based metrics compared to the existing adversarial methods.
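For the structure-based metrics referenced throughout these results, the modified Hausdorff distance of Equation (7) amounts to averaging the two directed mean closest-point distances between contour point sets. A NumPy sketch follows; contour extraction is skipped and hypothetical point sets are used directly, so this only illustrates the computation under that assumption.

```python
import numpy as np

def modified_hausdorff(contour_x, contour_y):
    """Eq. (7): average of the two directed mean closest-point distances
    between contour point sets X (prediction) and Y (ground truth)."""
    # Pairwise Euclidean distances, shape (|X|, |Y|).
    d = np.linalg.norm(contour_x[:, None, :] - contour_y[None, :, :], axis=-1)
    forward = d.min(axis=1).mean()    # X -> Y
    backward = d.min(axis=0).mean()   # Y -> X
    return 0.5 * (forward + backward)

# Toy usage: a hypothetical predicted contour and a slightly perturbed
# ground-truth contour; a small distance indicates good structural agreement.
rng = np.random.default_rng(0)
pred_contour = rng.random((40, 2)) * 100.0
gt_contour = pred_contour + rng.normal(scale=2.0, size=pred_contour.shape)
print(modified_hausdorff(pred_contour, gt_contour))
```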
Gambling adversarial networks vs. focal loss. The adversarial loss in gambling adversarial networks resembles the focal loss [30], since both methods up-weight the harder samples that contain more useful information for the update. The focal loss is defined as $L_{foc}(y, \hat{y}, p_t) = -(1 - p_t)^{\gamma}\, y \log \hat{y}$, where $p_t$ is the probability of the correct class and γ is a focusing factor, which indicates how much the easier samples are down-weighted. The advantage of the focal loss is that the ground-truth is exploited to choose the weights; the downside, however, is that the focal loss might over-pronounce ambiguous or incorrectly labeled pixels. The adversarial loss in gambling adversarial networks learns the weighting map, which can mitigate this noise effect. Moreover, the adversarial loss generates an extra flow of gradients (flow B), as observable in Figure 2. Gradient stream A provides information to the segmentation network independently of other pixel predictions, similar to the focal loss, whereas gradient stream B provides gradients reflecting structural qualities, which is lacking in the case of the focal loss. Insights into betting maps. Inspecting the betting maps (see for instance Figure 4), we observe that some of the bets correspond to the class borders, especially the ones that seemingly do not match the visual evidence in the underlying image or the expected shape of the object. We should note that even though there is a chance that the ground-truth labels on the borders differ from the predictions, blindly betting on all the borders is not even close to the optimal policy. The clearly bad structures in the predictions, e.g. the weird prediction of rider inside the car or the badly formed wall on the left side, are still more rewarding investments, and these are also spotted by the gambler. Conclusion. In this paper, we studied a novel reformulation of adversarial training for semantic segmentation, in which we replace the discriminator with a gambler network that learns to use the inter-pixel consistency clues to spot the wrong predictions. We showed that involving the segmenter in a minimax game with such a gambler results in notable improvements in structural and pixel-wise metrics, as measured on two road-scene semantic segmentation datasets. Supplementary Material. In this section, we provide the hyperparameters for gambling adversarial networks and extra qualitative results for the different experiments. Hyperparameters. In the following paragraphs, the hyperparameters for the experiments in the results section are described. U-Net based architecture on Cityscapes. Training details for the experiment on Cityscapes [6] with the U-Net based architecture [21]: we trained the segmenter and gambler in an alternating fashion of 200 and 400 iterations, respectively, over 300 epochs with a batch size of 4. The betting maps are calculated with a smoothing factor β of 0.02. Details for the segmenter and gambler are as follows: • Segmenter: optimizer: Adam [24], learning rate: 1e-4, beta1: 0.5, beta2: 0.99, adversarial coefficient λ: 1.0, weight decay: 5e-4. • Gambler: optimizer: Adam [24], learning rate: 1e-4, beta1: 0.5, beta2: 0.99, weight decay: 5e-4. PSPNet on Cityscapes. Training details for the experiment on Cityscapes [6] with PSPNet [48]: we trained the segmenter and gambler in an alternating fashion of 800 and 800 iterations, respectively, over 200 epochs with a batch size of 3. The betting maps are calculated with a smoothing factor β of 0.02.
Details for the segmenter and gambler are as follows: • Segmenter: optimizer: Adam [24], learning rate: 2.5e-5, beta1: 0.5, beta2: 0.99, adversarial coefficient λ: 1.0, weight decay: 5e-4. • Gambler: optimizer: Adam [24], learning rate: 2.5e-5, beta1: 0.5, beta2: 0.99, weight decay: 5e-4. PSPNet on Camvid. Training details for the experiment on Camvid [2] with PSPNet [48]: we trained the segmenter and gambler in an alternating fashion of 100 and 200 iterations, respectively, over 100 epochs with a batch size of 2. The betting maps are calculated with a smoothing factor β of 0.02. Details for the segmenter and gambler are as follows: • Segmenter: optimizer: Adam [24], learning rate: 5e-5, beta1: 0.5, beta2: 0.99, adversarial coefficient λ: 0.5, weight decay: 5e-4. • Gambler: optimizer: Adam [24], learning rate: 5e-5, beta1: 0.5, beta2: 0.99, weight decay: 5e-4. Qualitative results. In Figures 6 and 7, extra qualitative results are depicted for the experiments on the Cityscapes [6] validation set with the U-Net based architecture [21] and PSPNet [48] as segmentation networks. In Figure 8, some samples are shown for the test set of Camvid [2] with PSPNet as segmentation network. Figure 6: From left to right: RGB-image, ground-truth, CE, betting map, focal loss, CE + adv, EL-GAN, gambling nets. The betting map is predicted from the RGB image and the CE prediction as input. Results are for the Cityscapes [6] validation set with the U-Net based architecture [21]. Best visible zoomed-in on a screen. Figure 7: From left to right: RGB-image, ground-truth, CE, betting map, focal loss, CE + adv, EL-GAN, gambling nets. The betting map is predicted from the RGB image and the CE prediction as input. Results are for the Cityscapes [6] validation set with PSPNet [48]. Best visible zoomed-in on a screen. Figure 8: From left to right: RGB-image, ground-truth, CE, betting map, focal loss, CE + adv, EL-GAN, gambling nets. The betting map is predicted from the RGB image and the CE prediction as input. Results are for the Camvid [2] test set with PSPNet [48]. Best visible zoomed-in on a screen.
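Finally, the alternating optimization schedule listed in the hyperparameters above can be written as a simple loop. The update functions below are placeholders, and only the schedule bookkeeping (the alternation counts and the linearly decaying learning rate) reflects the settings reported for the U-Net/Cityscapes experiment; this is a sketch, not an actual training implementation.

```python
# Sketch of the alternating training schedule from the hyperparameters section
# (U-Net on Cityscapes: 200 segmenter iterations, then 400 gambler iterations,
# repeated; linearly decaying learning rate). Update functions are placeholders.

SEG_ITERS, GAM_ITERS = 200, 400     # alternation block sizes
TOTAL_ITERS = 3000                  # placeholder; the paper trains for 300 epochs
BASE_LR = 1e-4                      # Adam learning rate (beta1=0.5, beta2=0.99)

def update_segmenter(lr):           # placeholder for one segmenter Adam step
    pass

def update_gambler(lr):             # placeholder for one gambler Adam step
    pass

for iteration in range(TOTAL_ITERS):
    lr = BASE_LR * (1.0 - iteration / TOTAL_ITERS)   # linear decay over training
    block = iteration % (SEG_ITERS + GAM_ITERS)
    if block < SEG_ITERS:
        update_segmenter(lr)        # gambler frozen while the segmenter updates
    else:
        update_gambler(lr)          # segmenter frozen while the gambler updates
```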
4,106
1908.02711
2965409261
@cite_14 also discuss the value-based discrimination issue, which they attempt to alleviate by feeding the discriminator with a Cartesian product of the prediction maps and the input image channels. However, as reported, this strategy resulted in no improvements. This can be attributed to remaining value-based evidence stemming from the granularity of the value distributions. For instance, a very tiny response to a first-layer edge detector can, in this case, already signify a fake data sample.
{ "abstract": [ "Adversarial training has been shown to produce state of the art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network along with an adversarial network that discriminates segmentation maps coming either from the ground truth or from the segmentation network. The motivation for our approach is that it can detect and correct higher-order inconsistencies between ground truth segmentation maps and the ones produced by the segmentation net. Our experiments show that our adversarial training approach leads to improved accuracy on the Stanford Background and PASCAL VOC 2012 datasets." ], "cite_N": [ "@cite_14" ], "mid": [ "2950040358" ] }
The adversarial methods resolve some of the artifacts, such as the odd pattern in the car on the left. Moreover, the boundaries of the pedestrians on the sidewalk become more precise. We also provide an example betting map predicted by the gambler, given the predictions from the baseline model trained with cross-entropy in combination with the RGBimage. Note that the gambler bets on the badly shaped building in the prediction and responds to the artifacts in the car. PSPNet segmenter. We conduct experiments with the PSPNet [48] segmenter on the Camvid [2] and Cityscapes [6] datasets. In Table 5, the results are shown on the Cityscapes validation set. Again, the gambling adversarial networks perform better than the existing methods, on both of the structure-based metrics as well as the mean IoU. In Figure 5, a qualitative sample is shown, more can be found in the supplementary material. The gambling adversarial networks provides more details to the traffic lights. Also, the structure of the sidewalk shows significant improvements over the predictions from the model trained with standard segmentation loss. The quantitative results on the Camvid [2] test set are shown in Table 6. The gambling adversarial networks achieve the highest score on the structure-based metrics, but the standard adversarial training [32] performs best on the IoU. In the supplementary material, we provide qualitative results for the Camvid [2] test set and extra images for the aforementioned experiments. Discussion Correct/incorrect versus real/fake discrimination. We reformulated the adversarial real/fake discrimination task into training a critic network to learn to spot the likely incorrect predictions. As shown in Figure 3, the discrimination of real and fake causes undesired gradient feedback, since all the softmax vectors converge to a one-hot vector. We Table 4: BF-score [7] per class on the validation set of Cityscapes [6] with U-Net based architecture [21] as segmentation network empirically showed that this behavior is caused by a valuebased discrimination of the adversarial network. Moreover, modifying the adversarial task to correct/incorrect discrimination solves several problems. First of all, the reason to apply adversarial training to semantic segmentation is to improve on the high-level structures. However, the value-based discriminator is not only providing feedback based on the visual difference between the predictions and the labels, but also an undesirable value-based feedback. Moreover, updating the weights in a network with the constraint that the output must be a one-hot vector complicates training unnecessarily. Finally, the value-based discriminator Figure 5: Qualitative results on Cityscapes [6] with PSP-Net [48] as segmentation network hinders the network from properly disclosing uncertainty. Both the structured prediction and expressing uncertainty can be of great value for semantic segmentation, e.g. in autonomous driving and medical imaging applications. However, changing the adversarial task to discriminating the correct from the incorrect predictions resolves the aforementioned issues. The segmentation network is not forced to imitate the one-hot vectors, which preserves the uncertainty in the predictions and simplifies the training. Although we still notice that the gambler sometimes utilizes the prediction values by betting on pixels where the segmenter is uncertain, we also obtain improvements on the structure-based metrics compared to the existing adversarial methods. 
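To make the correct/incorrect formulation discussed above concrete, the following is a minimal PyTorch-style sketch of the losses in Equations (4)-(6); the network modules, the smoothing factor and the exact budget normalization are placeholders for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def betting_map(gambler_logits, beta=0.02):
    # Smoothed, budget-constrained bets in the spirit of Eq. (6):
    # sigmoid outputs plus a smoothing term, normalized to sum to one per image.
    g = torch.sigmoid(gambler_logits) + beta                    # (N, 1, H, W)
    return g / g.sum(dim=(2, 3), keepdim=True)

def gambler_loss(seg_logits, gambler_logits, target, beta=0.02):
    # Eq. (4): negative betting-weighted pixel-wise cross-entropy
    # (the gambler minimizes this, i.e. maximizes the weighted CE).
    ce = F.cross_entropy(seg_logits, target, reduction="none")  # (N, H, W)
    bets = betting_map(gambler_logits, beta).squeeze(1)         # (N, H, W)
    return -(bets * ce).sum(dim=(1, 2)).mean()

def segmenter_loss(seg_logits, gambler_logits, target, lam=1.0, beta=0.02):
    # Eq. (5): per-pixel cross-entropy minus the gambler's objective,
    # with lam as the (assumed) adversarial weighting coefficient.
    adv = -gambler_loss(seg_logits, gambler_logits, target, beta)
    return F.cross_entropy(seg_logits, target) + lam * adv
```

In an alternating schedule, a gambler step would minimize gambler_loss with the segmenter frozen, and a segmenter step would minimize segmenter_loss with the gambler frozen.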
Gambling adversarial networks vs. focal loss The adversarial loss in gambling adversarial networks resembles the focal loss [30], since both methods up-weight the harder samples that contain more useful information for the update. The focal loss is defined as: L f oc (y,ŷ, p t ) = −(1 − p t ) γ y logŷ, where p t is the probability of the correct class and γ is a focusing factor, which indicates how much the easier samples are down-weighted. The advantage of the focal loss is that the ground-truth is exploited to choose the weights, however, the downside is that the focal loss might be over-pronouncing the ambiguous or incorrectly labeled pixels. The adversarial loss in gambling adversarial networks learns the weighting map, which can mitigate the noise effect. Moreover, the adversarial loss generates an extra flow of gradients (flow B), as observable in Figure 2. Gradient stream A provides information to the segmentation network independent of other pixel predictions similar to the focal loss, whereas gradient stream B provides gradients reflecting structural qualities, which is lacking in case of the focal loss. Insights into betting maps Inspecting the betting maps (see for instance Figure 4), we observe that some of the bets correspond to the class borders, especially the ones that seemingly do not match the visual evidence in the underlying image or the expected shape of the object. We should note that even though there are chances that the groundtruth labels on the borders are different from the predictions, blindly betting on all the borders is not even close to the optimal policy. The clear bad structures in the predictions, e.g. the weird prediction of rider inside the car or the badly formed wall on the left side, are still more rewarding investments that are also being spotted by the gambler. Conclusion In this paper, we studied a novel reformulation of adversarial training for semantic segmentation, in which we replace the discriminator with a gambler network that learns to use the inter-pixel consistency clues to spot the wrong predictions. We showed that involving the segmenter in a minimax game with such a gambler results in notable improvements in structural and pixel-wise metrics, as measured on two road-scene semantic segmentation datasets. Supplementary Material In this section, we provide the hyperparameters for gambling adversarial networks and extra qualitative results for the different experiments. Hyperparameters In the following paragraphs, the hyperparameters for the experiments in the results section are described. U-Net based architecture on Cityscapes. Training details for the experiment on Cityscapes [6] with U-Net based architecture [21]. We trained the segmenter and gambler in alternating fashion of 200 and 400 iterations respectively over 300 epochs with a batch size of 4. The betting maps are calculated with a smoothing factor β of 0.02. Details for the segmenter and gambler are as following: • Segmenter: optimizer: Adam [24], learning rate: 1e-4, beta1: 0.5, beta2: 0.99, adversarial coefficient λ: 1.0, weight decay: 5e-4. • Gambler: optimizer: Adam [24], learning rate 1e-4, beta1: 0.5, beta2: 0.99, weight decay: 5e-4. PSPnet on Cityscapes. Training details for the experiment on Cityscapes [6] with PSPNet [48]. We trained the segmenter and gambler in alternating fashion of 800 and 800 iterations respectively over 200 epochs with a batch size of 3. The betting maps are calculated with a smoothing factor β of 0.02. 
Details for the segmenter and gambler are as follows: • Segmenter: optimizer: Adam [24], learning rate: 2.5e-5, beta1: 0.5, beta2: 0.99, adversarial coefficient λ: 1.0, weight decay: 5e-4. • Gambler: optimizer: Adam [24], learning rate: 2.5e-5, beta1: 0.5, beta2: 0.99, weight decay: 5e-4. PSPNet on Camvid. Training details for the experiment on Camvid [2] with PSPNet [48]. We trained the segmenter and gambler in an alternating fashion of 100 and 200 iterations respectively over 100 epochs with a batch size of 2. The betting maps are calculated with a smoothing factor β of 0.02. Details for the segmenter and gambler are as follows: • Segmenter: optimizer: Adam [24], learning rate: 5e-5, beta1: 0.5, beta2: 0.99, adversarial coefficient λ: 0.5, weight decay: 5e-4. • Gambler: optimizer: Adam [24], learning rate: 5e-5, beta1: 0.5, beta2: 0.99, weight decay: 5e-4. Qualitative results In Figures 6 and 7, extra qualitative results are depicted for the experiments on the Cityscapes [6] validation set with the U-Net based architecture [21] and PSPNet [48] as segmentation networks. In Figure 8, some samples are shown for the test set of Camvid [2] with PSPNet as segmentation network. Figure 6: From left to right: RGB-image, ground-truth, CE, betting map, focal loss, CE + adv, EL-GAN, gambling nets. The betting map is predicted by the gambler from the RGB image and the CE prediction. Results are for the Cityscapes [6] validation set with the U-Net based architecture [21]. Best viewed zoomed in on a screen. Figure 7: From left to right: RGB-image, ground-truth, CE, betting map, focal loss, CE + adv, EL-GAN, gambling nets. The betting map is predicted by the gambler from the RGB image and the CE prediction. Results are for the Cityscapes [6] validation set with PSPNet [48]. Best viewed zoomed in on a screen. Figure 8: From left to right: RGB-image, ground-truth, CE, betting map, focal loss, CE + adv, EL-GAN, gambling nets. The betting map is predicted by the gambler from the RGB image and the CE prediction. Results are for the Camvid [2] test set with PSPNet [48]. Best viewed zoomed in on a screen.
4,106
1908.02256
2966257000
Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary either by obtaining access to the model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations ( @math ), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the @math attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, demonstrating that high-frequency noise is introduced into the input image by the @math algorithm. To alleviate this high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. Finally, we present a regularization scheme to incorporate this low-pass filtering behavior into the training regime of the network.
Adversarial training is the technique of injecting adversarial examples and the corresponding gold-standard labels into the training set @cite_8 @cite_18 @cite_12 . The motivation of this methodology is that the network will learn the adversarial perturbations introduced by the attacker. The problem with adversarial training is that it doubles the training time of the classifier, as new examples need to be generated. Moreover, as shown by Papernot et al., adversarial training needs all types of adversarial examples produced by all known attacks, as the training process is non-adaptive @cite_25 . Our method can be paired with any of these types of defenses.
{ "abstract": [ "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive---new systems and models are being deployed in every domain imaginable, leading to rapid and widespread deployment of software based inference and decision making. There is growing recognition that ML exposes new vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited. We systematize recent findings on ML security and privacy, focusing on attacks identified on these systems and defenses crafted to date. We articulate a comprehensive threat model for ML, and categorize attacks and defenses within an adversarial framework. Key insights resulting from works both in the ML and security communities are identified and the effectiveness of approaches are related to structural elements of ML algorithms and the data used to train them. We conclude by formally exploring the opposing relationship between model accuracy and resilience to adversarial manipulation. Through these explorations, we show that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which they will be used.", "", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input." 
], "cite_N": [ "@cite_18", "@cite_25", "@cite_12", "@cite_8" ], "mid": [ "1945616565", "2572504188", "", "1673923490" ] }
BlurNet: Defense by Filtering the Feature Maps
Machine learning has been ubiquitous in various fields like computer vision and speech recognition. [12,10] However, despite these advancements, neural network classifiers have been found to be susceptible to so called adversarial images [23]. These images are created by altering some pixels in the input space so that a human cannot distinguish it from a natural image but a deep neural network will misclassify the input [9]. This obviously has severe implications considering the rise of self-driving cars and computer vision systems being installed in industrial applications. Many different types of attacks exist in the deep learning literature. Some of the most popular white-box attacks like the Fast Gradient Sign Method (FGSM) attack and Projected Gradient Descent Attack (PGD) use the gradients of the network to perturb the input [9,16]. Other attacks such as the Carlini Wagner (CW) attack formulate an optimization problem to minimize the distance D between an input x and a perturbed input x + δ such that x + δ must lie within a specified box constraint with D being the L ∞ , L 2 , L 1 distance metric [3]. Yet another class of attacks are those that physically alter the object to be classified with stickers and graffiti which causes the classifier to incur a misclassification [13,8]. In this paper, we are interested in exploring a defense to the attack proposed in Robust Physical-World Attacks [8]. In that work, the authors designed a general attack algorithm, Robust Physical Perturbations (RP 2 ) to generate visual adverserial perturbations which are supposed to mimic realworld obstacles to object classification. They sample physical stop signs from varying distances The spectrum has been log-shifted and normalized. Frequencies close to the center correspond to lower frequencies and those that are near the edges correspond to higher ones. Observing the spectrum of the stop sign does not give a clear indication where the perturbations from the stickers lie. Yellow corresponds to regions with most information content. and angles and use a mask to projected a computed perturbation onto these images. On a standard classifier for road classification, their attack is 100% successful in misclassifying stop signs. Many defenses use spatial smoothing [14,25,15] as means to stamp out the perturbation caused by adverserial attacks. Unforunately this approach is not always effective if the perturbation is in the form of a peice of tape on a stop sign. To verify this, we plot the Fast Fourier Transorm (FFT) spectrum of a vanilla and perturbed stop sign in Figure 1. Qualitatively, there does not seem to be any significant difference between the two spectrums making filtering the input a questionable defense. Instead, we introduce the simple solution of adding a lowpass filter to perform depthwise convolution after the first layer in the network. The main idea is to curb the spikes in the feature map caused by the perturbations by convolving them with a standard blur kernel. This will attenuate the some of the signal at the output layer but in turn will squash the spikes. We begin by giving an overview of the RP 2 algorithm and some background on machine learning security. In section 3, we discuss details on adding a filtering layer to dampen high frequency perturbations and perform a blackbox evaluation of RP 2 compared with input filtering. 
In section 4, we propose a regularization scheme for the network to learn the optimal parameters to incorporate the low-pass filtering behavior using the L ∞ norm and total variation minimzation [21]. We then perform a white-box evaluation with RP 2 and find that our algorithm is able to reduce the attack success rate from 90% to 17.5% compared to the baseline classifier. Problem definition Consider a neural network to be a function F (x) = y, such that F : R m → R n , where x is the input image and m = h * w * c such that h, w, c are the image height, image width, and channel number and n is a class probability vector of length of number of classes denoting the class of the image. The goal of an attack algorithm is to generate an image, x adv , so the classifier output mislabels the input image in this manner, F (x adv ) = y. The attack success rate is defined as the number of predictions altered by an attack, that is, 1 N N n=1 1[F (x n ) = F (x nadv )] . Another metric to characterize the attacker's success is the dissimilarity distance between natural and adverserial images, 1 N N n=1 ||x − x adv || p ||x adv || p .(1) where p can take different values depeding on the norm chosen; we restrict ourselves to the L 2 case. An adverserial attack is considered strong if the attack success rate is high while having a low dissimilarity distance. Attack and Threat Model We provide a description of the threat model that was considered when developing our defense. Robust Physical Perturbations Attack This algorithm is restricted to the domain of road sign classification, focused on finding effective physical perturbations that are agnostic to unconstrained environmental conditions such as distance and the viewing angle of the sign. This is called the Robust Physical Perturbation Attack, known as RP 2 . RP 2 is formulated as a single-image optimization problem which seeks to find the optimal perturbation δ to add an image x, such that the perturbed input x = x + δ causes the target classifier, f θ (·) to incur a misclassification: minH(x + δ, x), s.t.f θ (x + δ) = y * where H is a chosen distance function and y * is the target class. To solve the above constrained optimization problem, it must be reformulated as a Lagrangianrelaxed form [3]. The images that are fed into the classifier are mixed in with those from the physical world with those that are synthetically transformed. This is done because approximating the variable physical conditions is difficult with digital synthetic transformations on those images. The synthetic transformations include changing the brightness, cropping the image, and adding other spatial transformations to the input. Eykholt et al. model the distribution X V as the distribution of both the physical and digital images. Furthermore, this threat model differs from all the others [9,3,13] in that the noise introduced must be concentrated on a specific region of image. In the context of road sign classification, an attacker cannot alter the background of the scene so is therefore constrained to the sign. To mimic this effect, a binary mask, M x , is multiplied with the perturbation, δ, to concentrate the perturbation onto the sign. Since the perturbation is printed in the real world, we need to account for fabrication error, which ensures that the perturbation produced is a valid color in the physical world. This is quantified by the non-printability score(NPS), defined by Sharif et al. 
[22] given by: N P S = p∈R(δ) p ∈P |p − p |, where P is a set of printable colors and R(δ) is the set of RGB triples used in the perurbation. The final formulation of the optimization of the perturbation is presented as follows: argmin δ λ M x · δ p + N P S + E xi∼X V J(f θ (x i + T i (M x · δ)), y * ).(2) Here T i is an alignment function for the masked perturbation that is used to account for if the image, x i , was transformed so it can be placed optimally. For the distance metric in || · ||, both the L 1 and L 2 norms can be considered. In their experiments, Eykholt et al. found that L 1 regularization can find a sparse perturbation vector, attacking the vulnerable regions of the sign. They then recompute the perturbations with another mask placed over these regions with L 2 regularization. More details on the algorithm can be obtain from [8]. Transferability Another aspect of these adverserial attack algorithms is the transferability property. The idea behind transferability is that adverserial examples that are generated from a model where all the parameters Each row corresponds to one unique feature map. The first column is the spectrum of feature maps of an unperturbed stop sign. The second column corresponds to the spectrum of feature maps of a stop sign with the sticker attached. The third column is a difference between the unperturbed and perturbed spectrum. Finally, the fourth column is a blurred version of the difference spectrum. Values were normalized. are known, can be transfered over to another model that is not known to the attacker. It has been shown these transferability attacks can be performed between different classes of classifiers such as deep neural networks(DNNs), SVMs, nearest neighbors, etc. [18] The motivation for black box attack models arises from this property wherein the adversary is aware of the defense being deployed but does not have access to any of the network parameters or the exact training data [4]. This is the most difficult threat setting for the adversary to operate under as opposed to a white-box setting, in which all the information about the model parameters are known. Dataset and Model We adopt the setup from [8] by examining the LISA dataset [1] and a standard 4 layer DNN classifier in the Cleverhans framework [17]. LISA is a standard U.S traffic sign dataset containing 47 various signs, but since there exists a large class imbalance, we only consider the top 18 classes, just as [8]. The network architecture is comprised of 3 convolution layers and a fully-connected layer. We train all the classifiers for 2000 epochs with the ADAM optimizer with β 1 = 0.9, β 2 = 0.999, and = 10 −8 . We evaluate our defense based on a sample set of 40 stop sign images provided by [8] from their github repo. When we attempted to recreate the mask to place on the stop sign, we realized that the mask was manually generated after the L 1 optimization of the perturbation. As a result, we directly used the mask provided by the authors for our experiments. We leave the generation of the mask for the evaluation as future work. Motivation We begin by analyzing the effects of adding a sticker via the RP 2 algorithm from observing the feature maps of the classifier. Understanding differences in activations in both natural and adverserial examples can inform the design of an appropriate defense strategy. 
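One way to carry out this kind of activation-level inspection — comparing the first-layer feature maps of a clean and a perturbed input in the frequency domain, as described in the next paragraphs — is sketched below; the model handle and the perturbed image are placeholders, assuming any small convolutional classifier.

```python
import torch

def first_layer_spectrum(first_layer, x):
    """Log-magnitude 2-D FFT spectrum of each first-layer feature map.

    first_layer: a module mapping an image batch to (N, C, H, W) activations.
    Returns a (N, C, H, W) tensor with low frequencies shifted to the centre.
    """
    with torch.no_grad():
        feat = first_layer(x)
        spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
        return torch.log1p(spec.abs())

# Hypothetical usage: the difference between the two spectra highlights any
# high-frequency content introduced by the sticker perturbation.
# diff = first_layer_spectrum(conv1, x_perturbed) - first_layer_spectrum(conv1, x_clean)
```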
When we visualize the feature maps, we can observe an unwanted spike from the activation maps from the first layer in the spatial location where the mask is inserted over the sign. These spikes are large enough that as the activations propagate through the network they cause the classifier to misclassify the input [26,9]. Based on the assumptions of the threat model, the perturbation is constrained to be on the stop sign, which suggests that the neighboring values around the region of the perturbation are dissimilar in the activation map. In general, we would normally expect smooth transitions in the activation map of images; that is, Figure 3: The FFT Spectrum of a subsampling of feature maps from the second layer of the network. These feature maps were obtained from a normal stop sign. The high values indicated at the edges of the spectrum suggests that higher frequency information is relevant to maintain decent classification. neighbor activations within some spatial window should have approximately similar values. As a motivating example, we applied a standard 5x5 low-pass blur kernel over each of the feature maps. As a result of applying the filter the impact of the spike was substantially smaller. This initial analysis motivates us to propose a simple solution by applying a set of low-pass filters to the output of first layer of the network. For isolated spikes that are caused by adversarial perturbations, low-pass filters are a natural fit to smooth out unexpected spatial transitions in the activation maps. We focus exclusively on the feature maps after the first layer since spatial locality of the perturbation is still preserved. We insert these filters by performing a depthwise convolution on the feature maps to ensure that the filters are applied independently to each channel [5]. To evaluate the efficacy of inserting the depthwise convolution layer, we transform the feature maps into the frequency domain by computing the Fast Fourier Transform (FFT) of the natural image, adverserial image, and their respective blurred images, as shown in Figure 2. The spectrum is on a log-scale and shifted so points close to the center correspond to lower frequecies and points near the edges to higher frequencies. Based on Figure 2, most of the high frequency artifacts introduced from the perturbations were removed. We do observe some low-frequency components that were induced by the attack, but the influence from these, compared to the high frequency spikes, is much lower. Inserting filters in higher layers We choose only to look at the feature maps after the output of the first layer. We explored adding filters into higher layers of the neural network and we find that these reduce classification accuracy. We hypothesize the reason for this accuracy loss is that the higher layers in the network naturally contain high frequency information. We verify this hypothesis by computing the fast fourier transform of the activations maps of the higher level convolutional layers given in Figure 3. From Figure 3, we can see that the magnitude spectrum shows that the difference between higher and lower frequencies is not pronounced. If a low pass filter is introduced at this level in the network, too much information is lost for the DNN to make a meaningful prediction. In order to maintain classification accuracy, high frequencies in the feature maps should not be squashed. 
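A hedged sketch of the defense described above — a fixed depthwise blur applied only to the first layer's feature maps — is shown below; the surrounding classifier is a hypothetical small CNN rather than the exact LISA model, and the 5x5 averaging kernel is just one standard choice of blur kernel.

```python
import torch
import torch.nn as nn

class DepthwiseBlur(nn.Module):
    """Fixed (non-trainable) K x K averaging blur applied independently per channel."""
    def __init__(self, channels, kernel_size=5):
        super().__init__()
        self.blur = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels, bias=False)
        weight = torch.full((channels, 1, kernel_size, kernel_size),
                            1.0 / (kernel_size * kernel_size))
        self.blur.weight = nn.Parameter(weight, requires_grad=False)

    def forward(self, x):
        return self.blur(x)

class BlurredFirstLayerNet(nn.Module):
    # Hypothetical small classifier: the blur sits right after the first convolution,
    # since higher layers legitimately carry high-frequency information.
    def __init__(self, num_classes=18):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.blur1 = DepthwiseBlur(32, kernel_size=5)
        self.rest = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        return self.rest(self.blur1(self.conv1(x)))
```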
Adding a set of filters to the higher levels of the network is also difficult to justify from a semantic perspective, since the spatial locality of the features is not preserved, as the the receptive field of the neurons in upper layers is wider and even discontinuous due to non-unit convolution strides and/or max-pooling layers. Table 1, for lower kernel sizes, while blurring the input does not have much of an impact on the accuracy, it is not effective in alleviating the noise introduced by RP 2 . Compared to blurring the input, adding the blur kernels to the features generated by the first layer seems to effectively reduce the attack success rate at the cost of accuracy. This result motivates an attempt to alter the training regime so it learns the gain parameters in the filter implicitly, rather than setting them to predefined known values, so that robustness can be maintained at a minimal accuracy loss. Learning the Filter Parameters From the previous section, we see that filtering the feature maps is an effective scheme at discarding the perturbations introduced from RP 2 . However, the side effect of naively inserting a layer of low-pass filters is that the confidence of the prediction is reduced. In certain application domains such as autonomous vechicles, low confidence predictions from the classifier may not be acceptable. For correcting the reduction in the confidence, we seek to incorporate an additional loss term into the training of the classifier. We explore two different options for loss terms: L ∞ norm on the weights of the depthwise layer (added filter layer) and total variance(TV) minimization applied to the feature map (no added layer) [21]. To emulate the effect of adding a layer of low pass filters, the L ∞ norm is an apt choice for the depthwise weights. This will ensure that the weights in the kernel take similar values to each much like a low pass filter. The resulting loss that is minimized, where K is the number of channels in the input, is: min α K j=1 W depthwise [:, :, j] ∞ + J(f θ (x, y)).(3) Alternatively, we introduce the TV loss term into the optimization algorithm for the classifier, without adding an additional depthwise convolution to the network. Total variation of the image measures the pixel-level deviations for the nearest neighbor and minimizes the absolute difference between those neighbors. For a given image, the TV of an input image x is given as: T V (x) = i,j |x i+1,j − x i,j | + |x i,j+1 − x i,j |.(4) In general, the total variation loss is not differentiable so we choose an approximation of the function so that it can be optimized. We omit the depthwise convolution layer from the network and instead let the first layer of the network learn to filter out the high-frequency spikes in its feature map. Defining F as the set of feature maps after the first layer, the final optimization objective is given as: min α T V 1 N · K N i=1 K i=1 T V (F[i, :, :, k]) + J(f θ (x, y)),(5) where N and K are the batch size and the number of output channels, respectively. Intuitively, TV removes effects of details that have high gradients in the image, effectively targeting the perturbations introduced by RP 2 for denoising. TV encourages the neighboring values in the feature maps to be similar so the high spike introduced by RP 2 would be diminished. We perform a white-box evaluation and sweep the hyperparameters in the attack algorithm, λ and the attack target, y * . Our results are reported in Table 2. 
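Before turning to the white-box evaluation, here is a minimal sketch of the total-variation regularization of Equations (4)-(5) applied to the first-layer feature maps; the coefficient alpha_tv and the split of the model into a first layer and a remainder are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def tv_penalty(feat):
    """Anisotropic total variation of first-layer activations, cf. Eq. (4)-(5).

    feat: (N, K, H, W) feature maps; returns TV averaged over batch and channels.
    (The absolute value acts as a simple almost-everywhere-differentiable surrogate.)
    """
    dh = (feat[:, :, 1:, :] - feat[:, :, :-1, :]).abs().sum(dim=(2, 3))
    dw = (feat[:, :, :, 1:] - feat[:, :, :, :-1]).abs().sum(dim=(2, 3))
    return (dh + dw).mean()

def regularized_loss(first_layer, rest, x, y, alpha_tv=1e-4):
    # No explicit blur layer: the first layer itself is pushed towards low-pass behavior.
    feat = first_layer(x)
    logits = rest(feat)
    return F.cross_entropy(logits, y) + alpha_tv * tv_penalty(feat)
```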
In the white-box setting, the attacker has access to all the model parameters as well as the classification output. The main goal of the evaluation is to detect if the attack algorithm is able to introduce low-frequency perturbations to circumvent the filtering defense. The legitimate accuracy corresponds to the accuracy on the test set. We ran the attack algorithm for 300 epochs. When we sweep the parameters, we find that the attack target is the parameter most sensitive to increasing the sucess rate of the attack and is relatively invariant to λ. Certain attack targets are more amenable to attacks because there may be steeper gradients of the loss function with respect to those target labels. The legitimate accuracy refers to the accuracy on the test set and the average success rate is the attack success rate averaged across all 17 classes. (We omit the correct class) We also report the best case scenario for the attacker and the L 2 dissimilarity distance. We find that the TV minimization loss term has superior performance compared to all the other methods, bringing the attack success rate down to 17.5% at a L 2 dissimilarity distance of 0.224. TV is effectively encouraging the first convolutional weight to not only act as a feature extractor but also to stifle high variations coming from the input. For the depthwise convolution layer with the L ∞ norm regularizer, as the width of the filter increases, the network is able to attenuate the attack success rate because a large window from the surrounding neighbors will be able to smooth out the perturbation. However, the TV loss is better than applying the L ∞ as it is directly able to influence the weights to behave like a low-pass filter rather than indirectly through the L ∞ norm. In Figure 4, we plot the L 2 dissimilarity distance against the attack success rate to show the variation of each defense methods across the target labels. We find that TV loss terms have less variation than the other depthwise convolution layers. This supports the previous assertion that the TV term enables the first layer weights to adapt to low-pass behavior. Related Work Many kinds of defenses have been proposed in the machine learning security literature. There seems to have been two kinds of approaches to developing defenses: robust classification and detection. Robust classification refers to the classifier being able to correctly classify the input despite the perturbation whereas detection refers to a scheme of identifying if an example has been tampered with and rejecting it from the classifier. Recently, detection methods have seen much more popularity than robust classification (our method belongs to the latter class). However, in certain domains such as autonomous vehicles, it is not always feasible to reject the input from classifier. Robustness Adverserial training is the technique of injecting adverserial examples and the corresponding gold standard labels into the training set [23,9,16]. The motivation of this methodology is that the network will learn the adverserial perturbations introduced by the attacker. The problem with adverserial training is that it doubles the training time of the classifier as new examples need to be generated. Moreover, as shown by Papernot et al., adversarial training needs all types of adverserial examples produced by all known attacks, as the training process is non-adaptive [20]. Our method can be paired with any of these types of defenses. 
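For reference, the adversarial-training scheme summarized above can be sketched as follows, here with a single-step FGSM generator as one possible source of adversarial examples; the step size and the clean/adversarial mixing ratio are placeholder choices.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    # One common generator: a single gradient-sign step on the input.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # Inject adversarial examples with their gold-standard labels alongside clean data.
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```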
Input transformations Most previous work has applied some type of transform to the input image. In their paper, Guo et. al. use total variance minimization and image quilting to transform the input image. They use random pixel dropout and reconstruct the image with the removed perturbation [11]. Dziugaite et al. examined the effects of JPEG compression on adverserial images generated from the Fast Sign Gradient Method (FGSM) [7,9]. They report that for perturbations of small magnitude JPEG compression is able to recover some of the loss in classification accuracy, but not always. Xu et al. introduce feature squeezing, a detection method based on reducing the color bit of each pixel in the input and spatially smoothing the input with a median filter [25]. In their paper, Li et al. propose detecting adverserial examples by examining statistics from the convolutional layers and building a cascade classifier. They discover that they are able to recover some of the rejected samples by applying an average filter [14]. Liang et al. looked at using image processing techniques such as scalar quantization and a smoothing spatial filter to dampen the perturbations introduced. The authors introduce a metric, which they define as image entropy, to use different types of filters to smooth the input [15]. We stress that the key difference between these approaches and the proposed methods is that we introduce a change in the model architecture to incorporate it directly into the training process. Gradient Masking Gradient masking refers to the phenomenon of the gradients being hidden from the adversary by reducing model sensitivity to small changes applied to the input [19]. These can be due to operations that are added to the network that are not differentiable so regular gradient based attacks are insufficient. Another class of gradient masking includes introducing randomization into the network. Stochastic Activation Pruning essentially performs dropout at each layer where nodes are dropped according to some weighted distribution [6]. Xie et al. propose a randomization in which the defense randomly rescales and randomly zero-pads the input to an appropriate shape to feed to the classifier [24]. However, as Athalye et. al have shown in their paper, gradient masking is not an effective defense since the adversary can apply the Backward Pass Differential Approximation attack, in which the attacker approximates derivatives by computing the forward path and backward path with an approximation of the function. Even against randomization, the authors introduce another attack, Expectation over Transformation (EOT), where the optimization algorithm minimizes the expectation of the transformation applied to the input [2]. Conclusion We performed spectral analysis of the feature maps and saw that attacks introduce high-frequency components, which are amenable to low-pass filtering. Our proposal introduced a simple solution of adding low-pass filters after the first layer of the DNN. We compare with this with blurring the input image and show that blurring at the feature level can confer some robustness benefit at the cost of some accuracy by performing a black-box transferability attack with RP 2 . To compensate for the loss in accuracy, we explore two regularization schemes: adding a depthwise convolution and total variation minimization and show that we can recover the loss in accuracy while retaining significant robustness benefit. 
In the future, we hope to examine more types of attack algorithms and apply them to various datasets with our defense, in addition to developing new attacks that attempt to circumvent this defense.
4,058
1908.01823
2965547769
We propose a general approach for change-point detection in dynamic networks. The proposed method is model-free and covers a wide range of dynamic networks. The key idea behind our approach is to effectively utilize the network structure in designing change-point detection algorithms. This is done via an initial step of graphon estimation, where we propose a modified neighborhood smoothing (MNBS) algorithm for estimating the link probability matrices of a dynamic network. Based on the initial graphon estimation, we then develop a screening and thresholding algorithm for multiple change-point detection in dynamic networks. The convergence rate and consistency for the change-point detection procedure are derived as well as those for MNBS. When the number of nodes is large (e.g., exceeds the number of temporal points), our approach yields a faster convergence rate in detecting change-points comparing with an algorithm that simply employs averaged information of the dynamic network across time. Numerical experiments demonstrate robust performance of the proposed algorithm for change-point detection under various types of dynamic networks, and superior performance over existing methods is observed. A real data example is provided to illustrate the effectiveness and practical impact of the procedure.
@cite_5 proposes a novel estimator for estimating the link probability matrix @math of an undirected network by neighborhood smoothing (NBS). The essential idea consists of the following: Given an adjacent matrix @math , the link probability @math between node @math and @math is estimated by where @math is a certain set of neighboring nodes of node @math , which consists of the nodes that exhibit similar connection patterns as node @math . With a well-designed neighborhood adaptive to the network structure, the smoothing achieves an accurate estimation for @math . NBS in @cite_5 estimates @math with a single adjacency matrix @math . For a dynamic network, a sequence of adjacency matrices @math is available, which provides extra information of the network. By aggregating information from repeated observations across time, in Section , we propose a modified NBS by carefully shrinking the neighborhood size, which yields a better convergence rate in estimating the link probability matrix @math and thus an improved rate in change-point detection.
{ "abstract": [ "SummaryThe estimation of probabilities of network edges from the observed adjacency matrix has important applications to the prediction of missing links and to network denoising. It is usually addressed by estimating the graphon, a function that determines the matrix of edge probabilities, but this is ill-defined without strong assumptions on the network structure. Here we propose a novel computationally efficient method, based on neighbourhood smoothing, to estimate the expectation of the adjacency matrix directly, without making the structural assumptions that graphon estimation requires. The neighbourhood smoothing method requires little tuning, has a competitive mean squared error rate and outperforms many benchmark methods for link prediction in simulated and real networks." ], "cite_N": [ "@cite_5" ], "mid": [ "2962719258" ] }
Change-point detection in dynamic networks via graphon estimation
The last few decades have witnessed rapid advancement in models, computational algorithms and theories for inference of networks. This is largely motivated by the increasing prevalence of network data in diverse fields of science, engineering and society, and the need to extract meaningful scientific information out of these network data. In particular, the emerged field of statistical network analysis has spurred development of many statistical models such as latent space model [28], stochastic block model and their variants [12,13,2,30], and associated algorithms [36,23,17,1] for various inference tasks including link prediction, community detection and so on. However, the existing literature has been mostly focused on the analysis of one (and often large) network. While inference of single network remains to be an important research area due to its abundant applications in social network analysis, computational biology and other fields, there is emerging need to be able to analyze a collection of multiple network objects [11,14,22,4], with one notable example being temporal or dynamic networks. For example, it has become standard practice in many areas of neuroscience (e.g., neuro-imaging) to use networks to represent various notions of connectivity among regions of interest (ROI) in the brain observed in a temporal fashion when the subjects are engaged in some learning tasks. Analysis of such data demands development of new network models and tools, and leads to a growing literature on inference of dynamic networks, see e.g., [32,26,31]. Our work focuses on change-point detection in dynamic networks, which is an important yet less studied aspect of learning dynamic networks. The key insight of our proposed approach is to effectively utilize the network structure for efficient change-point detection in dynamic networks. This is done by first performing graphon estimation (i.e. link probability matrix estimation) for the dynamic network, which then serves as basis of a screening and thresholding procedure for change-point detection. For graphon estimation in dynamic networks, we propose a novel modified neighborhood smoothing (MNBS) algorithm, where a faster convergence rate is achieved via simultaneous utilization of the network structure and repeated observations of dynamic networks across time. Most existing literature on change-point detection in dynamic networks [e.g., 25,18,35,8] rely on specific model assumptions and only provide computational algorithms without theoretical justifications. In contrast, our method is nonparametric/model-free and thus can be applied to a wide range of dynamic networks. Moreover, we thoroughly study the consistency and convergence properties of our change-point detection procedure and provide its theoretical guarantee under a formal statistical framework. Numerical experiments on both synthetic and real data are conducted to further demonstrate the robust and superior performance of the proposed method. The paper is organized as follows. Section 2 discusses related work. Section 3 proposes an efficient graphon (link probability matrix) estimation method -MNBS, and Section 4 introduces our changepoint detection procedure. Numerical study on synthetic and real networks is carried out in Section 5. Our work concludes with a discussion. Technical proofs, additional numerical studies and suggestions of an additional graphon estimator can be found in the supplementary material. 
Modified neighborhood smoothing for dynamic networks In this section, we propose a neighborhood smoothing based estimator for link probability matrix estimation given repeated observations of an undirected network. This later serves as the basis for the proposed algorithm of multiple change-point detection for dynamic networks in Section 4. The basic setting is as follows. Given a network with a link probability matrix P , assume one observes independent (symmetric) adjacency matrices A (t) (t = 1, . . . , T ) such that A (t) ij ∼ Bernoulli(P ij ) for i ≤ j, independently. Based on the repeated observations {A (t) } T t=1 , we want to estimate the link probability matrix P . Note that when T = 1, this reduces to the classical problem of link probability matrix estimation for a network (e.g., [6], [10] and [39]). In particular, the neighborhood smoothing (NBS) proposed by [39] is a computationally feasible algorithm which enjoys competitive error rate and is demonstrated to work well for real networks. Motivated by NBS, we propose a modified neighborhood smoothing (MNBS) algorithm, which incorporates the repeated observations of the network across time and thus further improves the estimation accuracy of NBS. LetĀ = T t=1 A (t) /T, we define the distance measure between node i and i as in [39] such that d 2 (i, i ) = max k =i,i | Ā i· −Ā i · ,Ā k· |, where A i· denotes the ith row of A and · · denotes the inner product of two vectors. Based on the distance metric, define the neighborhood of node i as N i = i = i :d(i, i ) ≤ q i (q) ,(1) where q i (q) denotes the qth quantile of the distance set d (i, i ) : i = i . Given neighborhood N i for each node i, we define the modified neighborhood smoothing (MNBS) estimator as P ij = i ∈NiĀ i j |N i | .(2) Note that q is a tuning parameter and affects the performance of MNBS via a bias-variance trade-off. In [39], where T = 1, the authors set q = C(log n/n) 1/2 for some constant C > 0. Thus, for each node i, the size of its neighborhood |N i | is roughly C(n log n) 1/2 . For MNBS, we set q = C log n/(n 1/2 ω), where ω = min(n 1/2 , (T log n) 1/2 ). When T = 1, MNBS reduces to NBS. For T > 1, we have log n/(n 1/2 ω) < (log n/n) 1/2 and thus MNBS estimates P by smoothing over a smaller neighborhood. From a bias-variance trade-off point of view, the intuition behind this modification is that we can shrink the size of N i to reduce the bias ofP ij introduced by neighborhood smoothing while the increased variance ofP ij due to the shrunken neighborhood can be compensated by the extra averaged information brought by {A (t) } T t=1 across time. We proceed with studying theoretical properties of MNBS. We assume the link probability matrix P is generated by a graphon f : [0, 1] 2 × N → [0, 1] such that f (x, y) = f (y, x) and P ij = f (ξ i , ξ j ), for i, j = 1, . . . , n, and ξ i i.i.d. ∼ Uniform[0, 1]. As in [39], we study properties of MNBS for a piecewise Lipschitz graphon family, where the behavior of the graphon function f (x, y) is regulated in the following sense. x 0 < · · · < x K = 1 satisfying min 0≤s≤K−1 (x s+1 − x s ) > δ, and (ii) both |f (u 1 , v) − f (u 2 , v)| ≤ L|u 1 − u 2 | and |f (u, v 1 ) − f (u, v 2 )| ≤ L|v 1 − v 2 | hold for all u, u 1 , u 2 ∈ [x s , x s+1 ), v, v 1 , v 2 ∈ [x t , x t+1 ) and 0 ≤ s, t ≤ K − 1. As is illustrated by [39], this graphon-based theoretical framework is a general model-free scheme that covers a wide range of exchangeable networks such as the commonly used Erdös-Rényi model and stochastic block model. 
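A minimal NumPy sketch of the MNBS estimator in Equations (1)-(2) follows. It is written for clarity rather than speed (the pairwise-distance step materializes an n x n x n array), it follows the displayed distance definition (a common 1/n rescaling of all distances would not change the neighborhoods), and the final symmetrization is a convenience choice rather than part of the displayed formula.

```python
import numpy as np

def mnbs(A_seq, q):
    """Modified neighborhood smoothing estimate of the link probability matrix P.

    A_seq: array of shape (T, n, n) of symmetric adjacency matrices.
    q    : neighborhood quantile level, e.g. B0 * sqrt(log n) / sqrt(n * h).
    """
    A_bar = A_seq.mean(axis=0)                                # average adjacency over time
    n = A_bar.shape[0]

    # d2[i, j] = max over k != i, j of | <A_bar[i] - A_bar[j], A_bar[k]> |
    inner = (A_bar[:, None, :] - A_bar[None, :, :]) @ A_bar.T  # axis 2 indexes k
    for k in range(n):                                         # exclude k == i and k == j
        inner[k, :, k] = 0.0
        inner[:, k, k] = 0.0
    d2 = np.abs(inner).max(axis=2)

    P_hat = np.zeros_like(A_bar)
    for i in range(n):
        others = np.delete(np.arange(n), i)
        thresh = np.quantile(d2[i, others], q)                 # q-th quantile of distances
        neigh = others[d2[i, others] <= thresh]                # neighborhood N_i of Eq. (1)
        P_hat[i] = A_bar[neigh].mean(axis=0)                   # smoothing step of Eq. (2)
    return (P_hat + P_hat.T) / 2                               # optional symmetrization
```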
See more detailed discussion about Definition 3.1 in [39]. For any P, Q ∈ R n×n , define d 2,∞ , the normalized 2, ∞ matrix norm, by d 2,∞ (P, Q) = n −1/2 P − Q 2,∞ = max i n −1/2 P i· − Q i· 2 . We have the following error rate bound for MNBS. The sample size T is implicitly taken as a function of n and all limits are taken over n → ∞. Theorem 3.2 (Consistency of MNBS). Assume L is a global constant and δ = δ(n, T ) depends on n, T satisfying lim n→∞ δ/(n −1/2 ω −1 log n) → ∞ where ω = ω(n, T ) = min(n 1/2 , (T log n) 1/2 ), then the estimatorP defined in (2), with neighborhood N i defined in (1) and q = B 0 log n/(n 1/2 ω) for any global constant B 0 > 0 satisfies max f ∈F δ;L P d 2,∞ (P , P ) 2 ≥ C log n n 1/2 ω ≤ n −γ , for any γ > 0, where C is a positive global constant depending on B 0 and γ. It is easy to see that the error rate in Theorem 3.2 also holds for the normalized Frobenius norm d F (P , P ) = n −1 P − P F . For T = 1, ω = (log n) 1/2 and MNBS recovers the error rate (log n/n) 1/2 of NBS in [39]. For n > T , which is the realistic case for repeated temporal observations of a large dynamic network, we can set ω = (T log n) 1/2 for MNBS and thus have max f ∈F δ;L P d 2,∞ (P , P ) 2 ≥ C log n nT 1/2 ≤ n −γ . In other words, the network structure among n nodes and the repeated observations along time dimension T both help achieve better estimation accuracy for MNBS. In contrast, it is easy to see that a simply averagedĀ across time T cannot achieve improved performance when n increases. Remark 3.3. Another popular estimator of P , which is more scalable for large networks, is the USVT (Universal Singular Value Thresholding) proposed by [6]. In Section 8 of the supplementary material, we propose a modified USVT for dynamic networks by carefully lowering the singular value thresholding level forĀ. However, the convergence rate of the modified USVT is shown to be slower than MNBS. Thus we do not pursue USVT-based change-point detection here. MNBS-based multiple change-point detection In this section, we propose an efficient multiple change-point detection procedure for dynamic networks, which is built upon MNBS. We assume the observed dynamic network {A (t) } T t=1 are generated by a sequence of probability matrices {P (t) } T t=1 with A (t) ij ∼ Bernoulli(P (t) ij ) for t = 1, . . . , T . We are interested in testing the existence and further estimating the locations of potential change-points where P (t) = P (t+1) . More specifically, we assume there exist J (J ≥ 0) unknown change-points τ 0 ≡ 0 < τ 1 < τ 2 < . . . < τ J < T ≡ τ J+1 such that P (t) = P j , for t = τ j−1 + 1, . . . , τ j , and j = 1, . . . , J + 1. In other words, we assume there exist J + 1 non-overlapping segments of (1, . . . , T ) where the dynamic network follows the same link probability matrix on each segment and P j is the link probability matrix of the jth segment satisfying P j = P j+1 . Denote J = {τ 1 < τ 2 < . . . < τ J } as the set of true change-points and define J = ∅ if J = 0. Note that the number of change-points J is allowed to grow with the sample size (n, T ). A screening and thresholding change point detection algorithm For efficient and scalable computation, we adapt a screening and thresholding algorithm that is commonly used in change-point detection for time series, see, e.g. [16], [24], [40] and [38]. The MNBS-based detection procedure works as follows. Screening: Set a screening window of size h T . For each t = h, . . . 
Screening: Set a screening window of size $h \ll T$. For each $t = h, \ldots, T-h$, we calculate a local-window statistic $D(t, h) = d_{2,\infty}(\hat{P}_{t1,h}, \hat{P}_{t2,h})^2$, where $\hat{P}_{t1,h}$ and $\hat{P}_{t2,h}$ are the link probability matrices estimated by MNBS from the observed adjacency matrices $\{A^{(i)}\}_{i=t-h+1}^{t}$ and $\{A^{(i)}\}_{i=t+1}^{t+h}$, respectively. The local window size $h \ll T$ is a tuning parameter. In the following, we only consider the case where $(h\log n)^{1/2} \leq n^{1/2}$, which is the most likely scenario for real data applications and thus is more interesting. Therefore, for MNBS, we can set $\omega = \min(n^{1/2}, (h\log n)^{1/2}) = (h\log n)^{1/2}$ and $q = B_0(\log n)^{1/2}/(n^{1/2}h^{1/2})$. The result for $(h\log n)^{1/2} > n^{1/2}$ can be derived accordingly. Intuitively, $D(t, h)$ measures the difference between the link probability matrices within a small neighborhood of size $h$ before and after $t$, where a large $D(t, h)$ signals a high chance of $t$ being a change-point. We call a time point $x$ an $h$-local maximizer of the function $D(t, h)$ if $D(x, h) \geq D(t, h)$ for all $t = x-h+1, \ldots, x+h-1$.

Thresholding: Let $LM$ denote the set of all $h$-local maximizers of the function $D(t, h)$. We estimate the change-points by applying a thresholding rule to $LM$ such that
$$\hat{\mathcal{J}} = \{t \mid t \in LM \text{ and } D(t, h) > \Delta_D\},$$
where $\hat{\mathcal{J}}$ is the set of estimated change-points, $\hat{J} = \mathrm{Card}(\hat{\mathcal{J}})$, and $\Delta_D$ is the threshold taking the form
$$\Delta_D = \Delta_D(h, n) = D_0\,\frac{(\log n)^{1/2+\delta_0}}{n^{1/2}h^{1/2}},$$
for some constants $D_0 > 0$ and $\delta_0 > 0$. Note that $\Delta_D$ dominates the asymptotic order of the MNBS estimation error $C(\log n)^{1/2}/(n^{1/2}h^{1/2})$ of $\hat{P}_{t1,h}$ and $\hat{P}_{t2,h}$ quantified by Theorem 3.2. The proposed algorithm is scalable and can readily handle change-point detection in large-scale dynamic networks, as MNBS can be easily parallelized over the $n$ nodes and the screening procedure is parallelizable over time $t = h, \ldots, T-h$.

Theoretical guarantee of the change-point detection procedure

We first define several key quantities that are used for studying theoretical properties of the MNBS-based change-point detection procedure. Define $\Delta_j = d_{2,\infty}(P_j, P_{j+1})^2$ for $j = 1, \ldots, J$ and $\Delta^* = \min_{1\leq j\leq J}\Delta_j$, which is the minimum signal level in terms of the $d_{2,\infty}$ norm. Also, define $D^* = \min_{1\leq j\leq J+1}(\tau_j - \tau_{j-1})$, which is the minimum segment length. We assume that for each segment $j = 1, \ldots, J+1$, its link probability matrix $P_j$ is generated by a piecewise Lipschitz graphon $f_j \in \mathcal{F}_{\delta;L}$ as in Definition 3.1, where common constants $(\delta, L)$ are shared across segments. Note that $\mathcal{J} = \mathcal{J}(n, T)$, $J = J(n, T)$, $\Delta^* = \Delta^*(n, T)$, $D^* = D^*(n, T)$ and $\delta = \delta(n, T)$ are implicitly functions of $(n, T)$. We have the following consistency result.

Theorem 4.1. Assume there exists some $\gamma > 0$ such that $T/n^{\gamma} \to 0$. Assume $L$ is a global constant and $\delta = \delta(n, T)$, $h = h(n, T)$ and $D^* = D^*(n, T)$ depend on $n, T$ satisfying $h < D^*/2$ and $\lim_{n\to\infty}\delta/(n^{-1}h^{-1}\log n)^{1/2} \to \infty$. If we assume further that the minimum signal level $\Delta^* = \Delta^*(n, T)$ exceeds the detection threshold $\Delta_D = D_0(\log n)^{1/2+\delta_0}/(n^{1/2}h^{1/2})$, i.e., $\lim_{n\to\infty}\Delta^*/\Delta_D > 1$, then the MNBS-based change-point detection procedure with $q = B_0(\log n)^{1/2}/(n^{1/2}h^{1/2})$ satisfies
$$\lim_{n\to\infty}\mathbb{P}\big(\{\hat{J} = J\} \cap \{\mathcal{J} \subset \hat{\mathcal{J}} \pm h\}\big) = 1,$$
for any constants $B_0, D_0, \delta_0 > 0$, where $\mathcal{J} \subset \hat{\mathcal{J}} \pm h$ means $\tau_j \in \{\hat{\tau}_j - h + 1, \ldots, \hat{\tau}_j + h - 1\}$ for $j = 1, \ldots, J$.

In particular, Theorem 4.1 gives a sure coverage property of $\hat{\mathcal{J}}$ in the sense that the true change-point set $\mathcal{J}$ is asymptotically covered by $\hat{\mathcal{J}} \pm h$ such that $\max_{j=1,\ldots,J}|\hat{\tau}_j - \tau_j| < h$.
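The screening and thresholding steps above translate directly into code. The sketch below is a simplified version under assumed names (detect_change_points, d2inf_sq); it takes any link-probability estimator as a callable (for instance the mnbs sketch given earlier, which automatically uses $\omega = (h\log n)^{1/2}$ when applied to a window of length $h$), and uses the recommended default constants $D_0 = 0.25$ and $\delta_0 = 0.1$. Time indices follow the paper's 1-based convention.

```python
import numpy as np

def d2inf_sq(P, Q):
    """Squared normalized 2,infinity distance: max_i ||P_i. - Q_i.||_2^2 / n."""
    return np.max(np.sum((P - Q) ** 2, axis=1)) / P.shape[0]

def detect_change_points(A_list, h, estimator, D0=0.25, delta0=0.1):
    """Screening-and-thresholding change-point detection built on a link
    probability estimator (e.g. the MNBS sketch above)."""
    T, n = len(A_list), A_list[0].shape[0]
    D = np.full(T + 1, -np.inf)                       # D[t] for 1-based time t
    # Screening: local scan statistic D(t, h) for t = h, ..., T - h.
    for t in range(h, T - h + 1):
        P_left = estimator(A_list[t - h:t])           # uses A^(t-h+1), ..., A^(t)
        P_right = estimator(A_list[t:t + h])          # uses A^(t+1), ..., A^(t+h)
        D[t] = d2inf_sq(P_left, P_right)
    # Thresholding: keep h-local maximizers of D that exceed Delta_D.
    Delta_D = D0 * np.log(n) ** (0.5 + delta0) / np.sqrt(n * h)
    estimated = []
    for t in range(h, T - h + 1):
        window = D[max(1, t - h + 1):min(T, t + h - 1) + 1]
        if D[t] >= window.max() and D[t] > Delta_D:
            estimated.append(t)
    return estimated, D[1:]

# Usage (assuming mnbs from the earlier sketch is in scope):
# cps, scan = detect_change_points(A_list, h=int(np.sqrt(len(A_list))), estimator=mnbs)
```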
If $h/T \to 0$, Theorem 4.1 implies $\hat{J} = J$ and $\max_{j=1,\ldots,J}|\hat{\tau}_j/T - \tau_j/T| < h/T \to 0$ in probability, which further implies consistency of the relative locations of the estimated change-points $\hat{\mathcal{J}}$. A remarkable phenomenon occurs if the minimum signal level $\Delta^*$ is strong enough such that $\Delta^* > (\log n)^{1/2+\delta_0}/n^{1/2}$. In such a situation, we can set $h = 1$, and by the sure coverage property in Theorem 4.1 the proposed algorithm recovers the exact locations of the true change-points $\mathcal{J}$ without any error. This is in sharp contrast to the classical result for change-point detection in time series settings [37], where the optimal error rate for the estimated change-point location is shown to be $O_p(1)$. This unique property is due to the fact that MNBS provides accurate estimation of the link probability matrix by utilizing the network structure within each network via smoothing.

Remark 4.2 (Choice of tuning parameters). There are four tuning parameters of the MNBS-based detection algorithm: the local window size $h$, the neighborhood size $B_0$ and the threshold constants $(D_0, \delta_0)$. For the window size $h$, a smaller $h$ gives a better convergence rate of change-point estimation and is more likely to satisfy the constraint that $h < D^*/2$. On the other hand, a smaller $h$ puts a higher requirement on the detectable signal level $\Delta^*$, since we require $\Delta^*/\Delta_D > 1$ and a smaller $h$ implies a larger threshold $\Delta_D = D_0(\log n)^{1/2+\delta_0}/(n^{1/2}h^{1/2})$. The only essential requirement on $h$ is that $h < D^*/2$, i.e., the local window should not cover two true change-points at the same time. In practice, as long as a lower bound of $D^*$ is known, $h$ can be specified accordingly. For most applications, we recommend setting $h = \sqrt{T}$. For the choice of $B_0$ and $(D_0, \delta_0)$, note that Theorem 4.1 holds for any $B_0, D_0, \delta_0 > 0$. Thus the choice of $B_0$ and $(D_0, \delta_0)$ is more of a practical matter, and specific recommendations are provided in Section 7.2 of the supplementary material, where MNBS is found to give robust performance across a wide range of tuning parameters.

Remark 4.3 (Separation measure of signals). To the best of our knowledge, all existing works that study change-points in dynamic networks use the Frobenius norm as the separation measure between two link probability matrices. We instead use the $d_{2,\infty}$ norm, which in general gives a much weaker condition than the one based on the Frobenius norm. See the examples in Section 5.1.

Numerical studies

In this section, we conduct numerical experiments to examine the performance of the MNBS-based change-point detection algorithm for dynamic networks. For comparison, the graph-based nonparametric testing procedure in [7] is also implemented (via the R package gSeg provided by [7]) with type-I error $\alpha = 0.05$. We refer to the two detection algorithms as MNBS and CZ. To operationalize MNBS, we need to specify the neighborhood quantile $q = B_0(\log n)^{1/2}/(n^{1/2}h^{1/2})$ and the threshold $\Delta_D = D_0(\log n)^{1/2+\delta_0}/(n^{1/2}h^{1/2})$. In total, there are four tuning parameters: $h$ for the local window size, $B_0$ for the neighborhood size, and $D_0$ and $\delta_0$ for the threshold size. In Section 7.2 of the supplementary material, we conduct extensive numerical experiments and provide detailed recommendations for the calibration of the tuning parameters; in short, MNBS provides robust and stable performance across a wide range of tuning parameters, and we refer readers to Section 7.2 of the supplementary material for the detailed study. In the following, we use the recommended setting $h = \sqrt{T}$, $B_0 = 3$, $\delta_0 = 0.1$ and $D_0 = 0.25$.
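To make these recommendations concrete, the short helper below computes the tuning quantities implied by a given sample size. The function name mnbs_tuning and the returned dictionary are our own convenience wrappers around the formulas for $q$, $\Delta_D$ and the neighborhood size stated above.

```python
import numpy as np

def mnbs_tuning(n, T, B0=3.0, D0=0.25, delta0=0.1):
    """Recommended MNBS tuning parameters for n nodes and T snapshots."""
    h = int(np.sqrt(T))                                          # local window size h = sqrt(T)
    q = B0 * np.sqrt(np.log(n)) / np.sqrt(n * h)                 # neighborhood quantile level
    Delta_D = D0 * np.log(n) ** (0.5 + delta0) / np.sqrt(n * h)  # detection threshold
    neighbors = B0 * np.sqrt(n * np.log(n) / h)                  # approximate |N_i|
    return {"h": h, "q": q, "Delta_D": Delta_D, "approx_neighbors": neighbors}

# Example: n = 500 nodes observed over T = 100 snapshots.
print(mnbs_tuning(n=500, T=100))
```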
Performance on synthetic networks

In this section, we compare the performance of MNBS and CZ under various synthetic dynamic networks that contain single or multiple change-points. We first define seven different stochastic block models (SBM-I to SBM-VII), which we then use to build various dynamic networks that exhibit different types of change behavior. Denote $K_B$ as the number of blocks in an SBM, $M(i)$ as the membership of the $i$th node and $\Lambda$ as the connection probability matrix between blocks. Define $M_1(i) = I(1 \leq i \leq \lfloor n/3\rfloor) + 2I(\lfloor n/3\rfloor + 1 \leq i \leq 2\lfloor n/3\rfloor) + 3I(2\lfloor n/3\rfloor + 1 \leq i \leq n)$, where $I(\cdot)$ denotes the indicator function and $\lfloor x\rfloor$ denotes the integer part of $x$. Define
$$\Lambda_1 = \begin{pmatrix} 0.6 & 0.6-\Delta_{nT} & 0.3 \\ 0.6-\Delta_{nT} & 0.6 & 0.3 \\ 0.3 & 0.3 & 0.6 \end{pmatrix}, \quad
\Lambda_2 = \begin{pmatrix} 0.6+\Delta_{nT} & 0.6 & 0.3 \\ 0.6 & 0.6+\Delta_{nT} & 0.3 \\ 0.3 & 0.3 & 0.6 \end{pmatrix}, \quad
\Lambda_3 = \begin{pmatrix} 0.6 & 0.3 \\ 0.3 & 0.6 \end{pmatrix},$$
$$\Lambda_4 = \begin{pmatrix} 0.6+\Delta_{nT} & 0.6-\Delta_{nT} & 0.3 \\ 0.6-\Delta_{nT} & 0.6+\Delta_{nT} & 0.3 \\ 0.3 & 0.3 & 0.6 \end{pmatrix}, \quad
\Lambda_5 = \begin{pmatrix} 0.6 & 0.6-\Delta_{nT} \\ 0.6-\Delta_{nT} & 0.6 \end{pmatrix}.$$
The seven SBMs are defined as:
[SBM-I] $K_B = 2$, $M(i) = I(1 \leq i \leq 2\lfloor n/3\rfloor) + 2I(2\lfloor n/3\rfloor + 1 \leq i \leq n)$, $\Lambda = \Lambda_3$.
[SBM-II] $K_B = 2$, $M(i) = I(1 \leq i \leq 2\lfloor n(1-\Delta_{nT})/3\rfloor) + 2I(2\lfloor n(1-\Delta_{nT})/3\rfloor + 1 \leq i \leq n)$, $\Lambda = \Lambda_3$.
[SBM-III] $K_B = 3$, $M(i) = M_1(i)$, $\Lambda = \Lambda_1(\Delta_{nT})$.
[SBM-IV] $K_B = 3$, $M(i) = M_1(i)$, $\Lambda = \Lambda_2(\Delta_{nT})$.
[SBM-V] $K_B = 3$, $M(i) = M_1(i)$, $\Lambda = \Lambda_4(\Delta_{nT})$.
[SBM-VI] $K_B = 2$, $M(i) = I(1 \leq i \leq 2\lfloor n/3\rfloor) + 2I(2\lfloor n/3\rfloor + 1 \leq i \leq n)$, $\Lambda = \Lambda_5(\Delta_{nT})$.
[SBM-VII] $K_B = 2$, $M(i) = I(1 \leq i \leq 2\lfloor n/3\rfloor - 1) + 2I(2\lfloor n/3\rfloor \leq i \leq n)$, $\Lambda = \Lambda_5(\Delta_{nT})$.

Dynamic networks with change-points: Based on SBM-I to SBM-VII, we design five dynamic stochastic block models (DSBM) with single or multiple change-points. The signal level $\Delta^*$ of each DSBM/MDSBM is controlled by $\Delta_{nT}$ and decreases as the sample size $(T, n)$ grows. The detailed signal levels are summarized in Tables 4 and 5 of the supplementary material. Note that the signal level measured by the (normalized) Frobenius norm $d_F$ can be of a considerably smaller order than the one measured by the (normalized) $d_{2,\infty}$ norm, indicating that the $d_{2,\infty}$ norm is more sensitive to changes. This phenomenon is especially significant for DSBM-III, where $d^2_{2,\infty}(P_1, P_2) = 1/(n^{1/3}T^{1/4})$ and $d^2_F(P_1, P_2) = 2/(n^{4/3}T^{1/4})$, since only one node switches membership after the change-point, making the change very challenging to detect in terms of the Frobenius norm.

Simulation result: We vary $n = 100, 500, 1000$ and $T = 100, 500$. For each combination of sample size $(T, n)$ and DSBM/MDSBM, we conduct the simulation 100 times. Note that for the multiple change-point scenarios, we conduct the change-point analysis on MDSBM-I when $T = 100$ and on MDSBM-II when $T = 500$. To assess the accuracy of change-point estimation, we use the Boysen distance as suggested in [5] and [40]. Specifically, denote $\hat{\mathcal{J}}_{nT}$ as the estimated change-point set and $\mathcal{J}_{nT}$ as the true change-point set; we calculate the distance between $\hat{\mathcal{J}}_{nT}$ and $\mathcal{J}_{nT}$ via
$$\xi(\hat{\mathcal{J}}_{nT}\,\|\,\mathcal{J}_{nT}) = \sup_{b \in \mathcal{J}_{nT}}\inf_{a \in \hat{\mathcal{J}}_{nT}}|a - b| \quad\text{and}\quad \xi(\mathcal{J}_{nT}\,\|\,\hat{\mathcal{J}}_{nT}) = \sup_{b \in \hat{\mathcal{J}}_{nT}}\inf_{a \in \mathcal{J}_{nT}}|a - b|,$$
which quantify the under-segmentation error and over-segmentation error of the estimated change-point set $\hat{\mathcal{J}}_{nT}$, respectively. When $\hat{\mathcal{J}}_{nT} = \emptyset$ and $\mathcal{J}_{nT} \neq \emptyset$, we define $\xi(\hat{\mathcal{J}}_{nT}\,\|\,\mathcal{J}_{nT}) = \max_{\tau \in \mathcal{J}_{nT}}\tau$ and report $\xi(\mathcal{J}_{nT}\,\|\,\hat{\mathcal{J}}_{nT})$ as "-". The performance of MNBS and CZ is summarized in Table 6, where we report the average number of estimated change-points $\hat{J}$ and the average Boysen distances $\xi_1 = \xi(\hat{\mathcal{J}}_{nT}\,\|\,\mathcal{J}_{nT})$ for the under-segmentation error and $\xi_2 = \xi(\mathcal{J}_{nT}\,\|\,\hat{\mathcal{J}}_{nT})$ for the over-segmentation error over 100 runs.
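The following sketch shows how a dynamic SBM of the kind just described and the Boysen distances can be put together in code: it builds a two-segment DSBM from SBM-I (before the change-point) and SBM-VI (after), and evaluates an estimated change-point set against the truth. The helper names and the particular values $n = 90$, $T = 100$, $\tau = 50$ and $\Delta_{nT} = 0.2$ are illustrative choices of ours, not the settings used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbm_probability(membership, Lambda):
    """Link probability matrix with P_ij = Lambda[M(i), M(j)]."""
    return Lambda[np.ix_(membership, membership)]

def sample_adjacency(P, rng):
    """One symmetric adjacency matrix with independent Bernoulli(P_ij) edges (no self-loops)."""
    A = np.triu(rng.random(P.shape) < P, 1)
    return (A + A.T).astype(float)

def boysen(est, truth):
    """Boysen distances: (under-segmentation xi(est||truth), over-segmentation xi(truth||est)).
    Assumes both change-point sets are non-empty."""
    xi_under = max(min(abs(a - b) for a in est) for b in truth)
    xi_over = max(min(abs(a - b) for a in truth) for b in est)
    return xi_under, xi_over

# Two-segment DSBM: SBM-I before the change-point tau, SBM-VI after.
n, T, tau, delta = 90, 100, 50, 0.2
membership = np.where(np.arange(n) < 2 * (n // 3), 0, 1)       # blocks of sizes 2*floor(n/3) and the rest
Lambda3 = np.array([[0.6, 0.3], [0.3, 0.6]])                   # SBM-I connection probabilities
Lambda5 = np.array([[0.6, 0.6 - delta], [0.6 - delta, 0.6]])   # SBM-VI connection probabilities
P1, P2 = sbm_probability(membership, Lambda3), sbm_probability(membership, Lambda5)
A_list = [sample_adjacency(P1 if t <= tau else P2, rng) for t in range(1, T + 1)]

print(boysen(est=[48], truth=[tau]))   # (2, 2): an estimate at t = 48 misses tau = 50 by 2
```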
Real data analysis

In this section, we apply MNBS and CZ to perform change-point detection for the MIT proximity network data. The data were collected through an experiment conducted by the MIT Media Laboratory during the 2004-2005 academic year [9], where 90 MIT students and staff were monitored via their smartphones. The Bluetooth data give a measure of the proximity between two subjects and can be used to construct a link between them. Based on the recorded times of the Bluetooth scans, we construct a daily-frequency dynamic network among the 90 subjects by grouping the links per day. The network extracted from the Bluetooth scans is relatively dense (see Figure 2 in the supplementary material). There are in total 348 days from 07/19/2004 to 07/14/2005. The recommended window size is $h = \lfloor\sqrt{348}\rfloor = 18$; for better interpretation, we set $h = 14$, which corresponds to 2 weeks. CZ detects 18 change-points while MNBS gives 10 change-points. The detailed results are reported in Table 2. The two algorithms give similar change-point locations. Notably, CZ labels more change-points around the beginning and end of the time period, which may be suspected to be false positives. As a robustness check, we rerun the analysis for MNBS and CZ with $h = 7$, which corresponds to 1 week. CZ detects 28 change-points while MNBS detects 11 change-points, which further indicates the robustness of MNBS (this result is reported in Section 7.4 of the supplementary material). In Figure 2 (top) of the supplementary material, we plot the sequence of scan statistics $D(t, h)$ generated by MNBS, along with its $h$-local maximizers $LM$ and the estimated change-points $\hat{\mathcal{J}}$. Figure 2 (bottom) plots the time series of the total number of links of the dynamic network for illustration purposes, where MNBS is seen to provide an approximately piecewise-constant segmentation of the series.

Conclusion

We propose a model-free and scalable multiple change-point detection procedure for dynamic networks by effectively utilizing the network structure for inference. Moreover, the proposed approach is proven to be consistent and delivers robust performance across various synthetic and real data settings. One can leverage the insights gained from our work for other learning tasks, such as performing hypothesis tests on populations of networks. One potential weakness of MNBS-based change-point detection is that it is currently not adaptive to the sparsity of the network, as graphon estimation by MNBS is not adaptive to sparsity. We expect to be able to build the sparsity parameter $\rho_n$ into our procedure by assuming the graphon function $f(x, y) = \rho_n f_0(x, y)$ with a piecewise Lipschitz condition on $f_0$. This would potentially allow us to include $\rho_n$ in the error bound of MNBS (with the Frobenius norm normalized by $\rho_n$) and subsequently to adjust the detection threshold (and thus the minimum detectable signal strength) depending on $\rho_n$.

Additional SBMs and graphons

In addition to the seven SBMs (SBM-I to SBM-VII) defined in Section 5.1 of the main text, we further define four more SBMs (SBM-VIII to SBM-XI) and three graphons (Graphon-I to Graphon-III), which are later used for the numerical study of tuning parameter calibration and for specifying additional dynamic networks for change-point analysis. Again, denote $K_B$ as the number of blocks/communities in an SBM, $M(i)$ as the membership of the $i$th node and $\Lambda$ as the connection probability matrix between blocks. The three graphons are borrowed from [39].
[SBM-VIII] $K_B = 2$, $M(i) = I(1 \leq i \leq \lfloor n^{3/4}\rfloor) + 2I(\lfloor n^{3/4}\rfloor + 1 \leq i \leq n)$, $\Lambda = \begin{pmatrix} 0.6 & 0.3 \\ 0.3 & 0.6 \end{pmatrix}$.
[SBM-IX] $K_B = 2$, $M(i) = I(1 \leq i \leq \lfloor n^{3/4}\rfloor) + 2I(\lfloor n^{3/4}\rfloor + 1 \leq i \leq n)$, $\Lambda = \begin{pmatrix} 0.6-\Delta_{nT} & 0.3 \\ 0.3 & 0.6 \end{pmatrix}$.
[SBM-X] $K_B = 2$, $M(i) = I(1 \leq i \leq \lfloor n/2\rfloor) + 2I(\lfloor n/2\rfloor + 1 \leq i \leq n)$, $\Lambda = \begin{pmatrix} 0.6 & 0.6-\Delta_{nT} \\ 0.6-\Delta_{nT} & 0.6 \end{pmatrix}$.
[SBM-XI] $K_B = 2$, $M(i) = I(i \text{ is odd}) + 2I(i \text{ is even})$, $\Lambda = \begin{pmatrix} 0.6 & 0.6-\Delta_{nT} \\ 0.6-\Delta_{nT} & 0.6 \end{pmatrix}$.
[Graphon-I] $f(u, v) = k/(K_B+1)$ if $(u, v) \in ((k-1)/K_B, k/K_B)^2$ for some $k$, and $f(u, v) = 0.3/(K_B+1)$ otherwise; $K_B = \lfloor\log n\rfloor$.
[Graphon-II] $f(u, v) = \sin\{5\pi(u+v-1)+1\}/2 + 0.5$.
[Graphon-III] $f(u, v) = (u^2+v^2)/3 \cdot \cos\{1/(u^2+v^2)\} + 0.15$.

Calibration of tuning parameters

In this section, we discuss and recommend choices of the tuning parameters for MNBS. To operationalize the MNBS-based detection procedure, we need to specify the neighborhood quantile $q = B_0(\log n)^{1/2}/(n^{1/2}h^{1/2})$ and the threshold $\Delta_D = \Delta_D(h, n) = D_0(\log n)^{1/2+\delta_0}/(n^{1/2}h^{1/2})$. In total, there are four tuning parameters: $B_0$ for the neighborhood size, $h$ for the local window size, and $D_0$ and $\delta_0$ for the threshold size. By Theorem 4.1, to achieve consistent detection of true change-points, the minimum signal level $\Delta^*$ needs to be larger than the threshold $\Delta_D$. Thus, in terms of the minimum detectable signal, we prefer a larger $h$ and a smaller $\delta_0$, so that asymptotically we can achieve a larger detectable region. On the other hand, to achieve a tighter confidence region for the estimated change-point locations, we prefer a smaller $h$, and we prefer a larger $\delta_0$ since it helps reduce false positives under small sample sizes. For finite samples, we recommend setting $\delta_0 = 0.1$ and $h = \sqrt{T}$, which makes the weakest detectable signal of MNBS of order $O\big((\log n)^{0.6}/(n^{1/2}T^{1/4})\big)$. For the neighborhood size $B_0$, [39] demonstrates that the performance of the neighborhood-based estimation is robust to the choice of $B_0$ in the range $[e^{-1}, e^2]$. Following [39], we recommend setting $B_0 = 1, 2$ or $3$. Note that the number of neighbors in MNBS is $B_0(n\log n/h)^{1/2}$. To control the variance of MNBS, we suggest choosing a $B_0$ such that $B_0(n\log n/h)^{1/2} > 10$. For all the following simulations, we set $B_0 = 3$; $B_0 = 1, 2$ give similar numerical performance.

To study the sensitivity of $D_0$ with respect to false positives under finite samples, we simulate dynamic networks $\{A^{(t)}\}_{t=1}^{T}$ with no change-points from SBM-III, VIII, XI and Graphons I-III. We vary $D_0$ (and thus the threshold $\Delta_D$) and examine the performance of MNBS. We vary the sample size over $n = 100, 500, 1000$ and $T = 100, 500$. For each combination of the sample size $(T, n)$ and the network model, we conduct the simulation 100 times. Figure 1 reports the curves of the average $|\hat{J}_{nT} - J_{nT}|$, the difference between the estimated and true numbers of change-points, versus $D_0$ under the six different network models. Note that $J_{nT} \equiv 0$, thus $|\hat{J}_{nT} - J_{nT}| = \hat{J}_{nT}$ and the discrepancy represents the severity of false positive detection. As can be seen clearly, under various models and sample sizes, the region with $D_0 \geq 0.25$ controls false positives reasonably well for MNBS. Thus, in practice we recommend setting $D_0 = 0.25$ for the largest power. Note that out of the four block-structured models (SBM-III, VIII, XI and Graphon-I), the most challenging case is SBM-VIII, which may be due to its imbalanced block sizes. The corresponding false positive results of MNBS, together with the empirical type-I error of CZ under the same no-change-point settings, are summarized in Table 3. As can be seen, the performance of MNBS is reasonably good and improves as $n$ increases, with some false positives in the large-$T$, small-$n$ case ($T = 500$, $n = 100$).
This is due to the relatively large variance of MNBS incurred by the small neighborhood size. As for CZ, the empirical type-I error is roughly controlled at the target level 0.05 in most cases, while experiencing some inflation under $T = 500$, $n = 100$. The numerical experiments demonstrate that MNBS provides robust and stable performance across a wide range of tuning parameters. To summarize, in practice we recommend setting $h = \sqrt{T}$, $\delta_0 = 0.1$, $B_0 \in \{1, 2, 3\}$ and $D_0 \geq 0.25$.

Additional results on synthetic networks

In this section, we provide additional numerical experiments comparing the performance of MNBS and CZ. Specifically, we design three additional dynamic stochastic block models (DSBM-IV to DSBM-VI) and conduct change-point detection analysis.

Signal levels: The signal levels $\Delta^*$ of DSBM-I to DSBM-VI are summarized in Table 4. The signal levels of MDSBM-I and MDSBM-II are summarized in Table 5. Again, we define the normalized $d_{2,\infty}$ norm as $d_{2,\infty}(P, Q) = n^{-1/2}\|P - Q\|_{2,\infty} = \max_i n^{-1/2}\|P_{i\cdot} - Q_{i\cdot}\|_2$ and the normalized Frobenius norm as $d_F(P, Q) = n^{-1}\|P - Q\|_F$. Note that in general the signal level measured by the Frobenius norm $d_F$ is of considerably smaller order than the one measured by the $d_{2,\infty}$ norm, indicating that the $d_{2,\infty}$ norm is more sensitive to changes. This phenomenon is especially significant for DSBM-III, where only one node switches membership after the change-point, making the change very challenging to detect in terms of the Frobenius norm.

Table 5: Signal levels of $P_i - P_{i+1}$, $i = 1, \ldots, J$, for MDSBM-I and MDSBM-II at the change-points $\tau_1, \ldots, \tau_4$ (entries listed in the order $\tau_1, \tau_2, \tau_3, \tau_4$).
MDSBM-I ($d^2_{2,\infty}$): $0.3^2$, $\frac{1}{3T^{1/4}n^{1/3}}$, $\frac{1}{3T^{1/4}n^{1/3}}$, -
MDSBM-I ($d^2_F$): $\frac{8\cdot 0.3^2}{3T^{1/4}n^{1/3}}$, $\frac{2}{9T^{1/4}n^{1/3}}$, $\frac{2}{9T^{1/4}n^{1/3}}$, -
MDSBM-II ($d^2_{2,\infty}$): $0.3^2$, $\frac{1}{3T^{1/4}n^{1/3}}$, $\frac{1}{3T^{1/4}n^{1/3}}$, $\frac{1}{3T^{1/4}n^{1/3}}$
MDSBM-II ($d^2_F$): $\frac{8\cdot 0.3^2}{3T^{1/4}n^{1/3}}$, $\frac{2}{9T^{1/4}n^{1/3}}$, $\frac{2}{9T^{1/4}n^{1/3}}$, $\frac{2}{9T^{1/4}n^{1/3}}$

Simulation setting and result: We set the sample size as $n = 100, 500, 1000$ and $T = 100, 500$. For each combination of sample size $(T, n)$ and DSBM, we conduct the simulation 100 times. The performance of MNBS and CZ for DSBM-IV to DSBM-VI is reported in Table 6, where we report the average number of estimated change-points $\hat{J}$ and the average Boysen distances $\xi_1 = \xi(\hat{\mathcal{J}}_{nT}\,\|\,\mathcal{J}_{nT})$ for the under-segmentation error and $\xi_2 = \xi(\mathcal{J}_{nT}\,\|\,\hat{\mathcal{J}}_{nT})$ for the over-segmentation error. In general, both MNBS and CZ provide satisfactory performance for DSBM-IV to DSBM-VI, while MNBS offers superior performance with a more accurate estimated number of change-points $\hat{J}$ and smaller Boysen distances $\xi_1, \xi_2$ for both under- and over-segmentation errors.

Table 6: Average number of estimated change-points $\hat{J}$ and Boysen distances $\xi_1, \xi_2$ by MNBS and CZ for DSBM-IV to DSBM-VI under single change-point scenarios.

A modified USVT (MUSVT) estimator

In this section, we describe another estimator of the link probability matrix $P$ based on repeated observations of a dynamic network over time, via singular value thresholding. This is a modification of the universal singular value thresholding (USVT) procedure proposed by [6]. More specifically, as in the main text, assume one observes independent (symmetric) adjacency matrices $A^{(t)}$ ($t = 1, \ldots, T$) such that $A^{(t)}_{ij} \sim \mathrm{Bernoulli}(P_{ij})$ for $i \leq j$, independently, and let $\bar{A} = \sum_{t=1}^{T}A^{(t)}/T$. In applying MUSVT to estimate $P$, the major steps can be summarized as follows:
1. Let $\bar{A} = \sum_{i=1}^{n} s_i u_i u_i^{\mathsf{T}}$ be the singular value decomposition of the averaged adjacency matrix $\bar{A}$.
2. Let $S = \{i : s_i \geq (2+\eta)\sqrt{n/T}\}$, where $\eta \in (0, 1)$ is some small positive number, and let $\tilde{A} = \sum_{i \in S} s_i u_i u_i^{\mathsf{T}}$.
3. Define $\hat{P} = (\hat{P}_{ij})$, where
$$\hat{P}_{ij} := \begin{cases} \tilde{A}_{ij}, & \text{if } 0 \leq \tilde{A}_{ij} \leq 1,\\ 1, & \text{if } \tilde{A}_{ij} > 1,\\ 0, & \text{if } \tilde{A}_{ij} < 0. \end{cases}$$
$\hat{P}$ serves as the final estimate of $P$. The key distinction between our estimate and the one in [6] is that we utilize $\bar{A}$, which allows us to lower the threshold level from an order of $\sqrt{n}$ to $\sqrt{n/T}$. Theorem 8.1 quantifies the rate of $\hat{P}$ in approximating $P$.

Theorem 8.1. Assume $P$ arises from a graphon $f$ that is piecewise Lipschitz as defined in Definition 3.1. Then, under the condition that $T \leq n^{1-a}$ for some constant $a > 0$, the following holds:
$$\mathbb{P}\Big(d_F(\hat{P}, P)^2 \geq C(f, n, \delta)\,\frac{1}{n^{1/3}T^{1/4}}\Big) \leq \epsilon(n, T), \quad \text{with } \epsilon(n, T) \to 0 \text{ as } n, T \to \infty,$$
where $d_F(\cdot, \cdot)$ stands for the normalized (by $1/n$) Frobenius distance and $C(f, n, \delta)$ is a constant depending on $f$, $n$ and $\delta$.

Proof. The key ingredients of the proof are to bound the spectral norm of $\bar{A} - P$ as well as the nuclear norm of $P$. Specifically, by Lemma 3.5 in [6], one has
$$\|\hat{P} - P\|_F \leq K(\delta)\big(\|\bar{A} - P\|\,\|P\|_*\big)^{1/2}, \qquad (4)$$
and by Theorem 3.4 in [6], one has, under the condition that $T < n^{1-a}$ for some $a > 0$,
$$\mathbb{P}\big(\|\bar{A} - P\| \geq (2+\eta)\sqrt{n/T}\big) \leq C_1(\eta)\exp(-C_2\,n/T), \qquad (5)$$
where $K(\delta) = (4+2\delta)\sqrt{2/\delta} + \sqrt{2+\delta}$, $\|\cdot\|_F$ is the Frobenius norm, $\|\cdot\|$ stands for the spectral norm and $\|\cdot\|_*$ is the nuclear norm. The bound on $\|P\|_*$ is exactly the same as that in Theorem 2.7 in [6]. Only the term $\|\bar{A} - P\|$ affects the improved rate, by a factor of $1/T^{1/4}$, resulting in the new rate of the theorem.

The MUSVT procedure proposed above can be used as the initial graphon estimate in designing our change-point detection algorithm. We can prove consistency of the change-point estimation and obtain results similar to Theorem 4.1 for MNBS, only requiring a slightly higher threshold level and stronger conditions on the minimal true signal strength. The difference in the rates of the two quantities is driven by the quality of the initial graphon estimates. Although the rates in Theorem 8.1 and the resulting requirements for consistency in change-point detection are not as good as those of MNBS, MUSVT enjoys some computational advantages when the number of nodes $n$ is large, and thus may still serve as a practical alternative.

9 Supplementary material: proofs of theorems

Proposition 9.1 (Bernstein inequality). Let $X_1, X_2, \ldots, X_n$ be independent zero-mean random variables. Suppose that $|X_i| \leq M$ a.s. for all $i$. Then for all positive $t$, we have
$$\mathbb{P}\Big(\sum_{i=1}^{n} X_i > t\Big) \leq \exp\Big(-\frac{\tfrac{1}{2}t^2}{\sum_{i=1}^{n}\mathbb{E}(X_i^2) + \tfrac{1}{3}Mt}\Big).$$
By Proposition 9.1, for a sequence of independent Bernoulli random variables $X_i \sim \mathrm{Bernoulli}(p_i)$, we have
$$\mathbb{P}\Big(\Big|\sum_{i=1}^{n}(X_i - p_i)\Big| > t\Big) \leq 2\exp\Big(-\frac{\tfrac{1}{2}t^2}{\sum_{i=1}^{n} p_i(1-p_i) + \tfrac{1}{3}t}\Big).$$
In the following, we prove the theoretical properties of MNBS by first giving two lemmas, which extend the results of Lemmas 1 and 2 in [39] to the case where $T \geq 1$ repeated observations of a network are available. Denote $I_k = [x_{k-1}, x_k)$ for $1 \leq k \leq K-1$ and $I_K = [x_{K-1}, 1]$ for the intervals in Definition 3.1, and denote $\delta = \min_{1\leq k\leq K}|I_k|$. For any $\xi \in [0, 1]$, let $I(\xi)$ denote the $I_k$ that contains $\xi$. Let $S_i(\Delta) = [\xi_i - \Delta, \xi_i + \Delta] \cap I(\xi_i)$ denote the neighborhood of $\xi_i$ in which $f(x, y)$ is Lipschitz in $x \in S_i(\Delta)$ for any fixed $y$.

Lemma 9.2 (Neighborhood size). For any global constants $C_1 > B_1 > 0$, define $\Delta_n = C_1\frac{\log n}{n^{1/2}\omega}$. If $\frac{n^{1/2}}{\omega}\cdot\frac{(C_1-B_1)^2}{7C_1-B_1} > \gamma + 1$, there exists $\tilde{C}_1 > 0$ such that, for $n$ large enough so that $\Delta_n < \min_k|I_k|/2$, we have
$$\mathbb{P}\Big(\min_i \frac{|\{i' \neq i : \xi_{i'} \in S_i(\Delta_n)\}|}{n-1} \geq B_1\frac{\log n}{n^{1/2}\omega}\Big) \geq 1 - 2n^{-(\tilde{C}_1+\gamma)}.$$
Proof of Lemma 9.2.
For any i, by the definition of S i (∆ n ), we know that ∆ n ≤ |S i (∆ n )| ≤ 2∆ n . By Bernstein inequality, we have P |{i = i : ξ i ∈ S i (∆ n )}| n − 1 − |S i (∆ n )| > n ≤ 2 exp − 1 2 (n − 1) 2 2 n (n − 1)2∆ n + 1 3 (n − 1) n ≤ 2 exp − 1 3 n 2 n 2∆ n + 1 3 n . Take a union bound over all i's gives P max i |{i = i : ξ i ∈ S i (∆ n )}| n − 1 − |S i (∆ n )| > n ≤ 2n exp − 1 3 n 2 n 2∆ n + 1 3 n . Let ∆ n = C 1 log n n 1/2 ω and n = C 2 log n n 1/2 ω with C 2 = C 1 − B 1 > 0, we have P max i |{i = i : ξ i ∈ S i (∆ n )}| n − 1 − |S i (∆ n )| > n ≤2n exp − 1 3 n 2 n 2∆ n + 1 3 n ≤ 2n exp − 1 3 nC 2 2 log n n 1/2 ω 2C 1 + 1 3 C 2 =2n 1− n 1/2 ω C 2 2 6C 1 +C 2 ≤ 2n −(C1+γ) , for someC 1 > 0 as long as n 1/2 ω · C 2 2 6C1+C2 = n 1/2 ω · (C1−B1) 2 7C1−B1 > 1 + γ. Thus, with probability 1 − 2n −(C1+γ) , we have min i |{i = i : ξ i ∈ S i (∆ n )}| n − 1 ≥ min i S i (∆ n ) − n ≥ ∆ n − n = (C 1 − C 2 ) log n n 1/2 ω = B 1 log n n 1/2 ω . This completes the proof of Lemma 9.2. · T ω 2 > 2 + γ and C3 2 · n 1/2 ω > 2 + γ hold, there existsC 2 > 0 such that if n is large enough so that (i) all conditions on n in Lemma 9.2 hold; (ii) B 1 n 1/2 log n ω ≥ 4, then the neighborhood N i has the following properties: 1. |N i | ≥ B 0 n 1/2 log n ω . 2. With probability 1 − 2n −(C1+γ) − 2n −(C2+γ) , for all i and i ∈ N i , we have P i· − P i · 2 2 /n ≤ (6LC 1 + 24C 3 ) log n n 1/2 ω . Proof of Lemma 9.3. The first claim follows immediately from the definition of quantile and q, since |N i | ≥ n · q = nB 0 log n n 1/2 ω = B 0 n 1/2 log n ω . To prove the second claim, we first give a concentration result. For any i, j such that i = j, we have (Ā 2 /n) ij − (P 2 /n) ij = k (Ā ikĀkj − P ik P kj ) /n ≤ k =i,j (Ā ikĀkj − P ik P kj ) n − 2 · n − 2 n + (Ā iiĀij − P ii P ij ) n + (Ā ijĀjj − P ij P jj ) n . We can easily show that Var(Ā ikĀkj ) = P 2 ik P kj (1 − P kj ) + P 2 kj P ik (1 − P ik ) T + P ik (1 − P ik )P kj (1 − P jk ) T 2 ≤ 1 T . Thus, by the independence amongĀ ikĀkj and Bernstein inequality, we have P   k =i,j (Ā ikĀkj − P ik P kj ) n − 2 ≥ n   ≤ 2 exp − 1 2 (n − 2) 2 2 n (n − 2) 1 T + 1 3 (n − 2) n ≤ 2 exp − 1 3 n 2 n 1 T + 1 3 n . Take a union bound over all i = j, we have P   max i,j:i =j k =i,j (Ā ikĀkj − P ik P kj ) n − 2 ≥ n   ≤ 2n 2 exp − 1 3 n 2 n 1 T + 1 3 n (6) ≤2n 2 max exp − 1 6 nT 2 n , exp − 1 2 n n . Let n = C 3 log n n 1/2 ω , we have 2n 2 exp − 1 6 nT 2 n = 2n 2 exp − 1 6 nT C 2 3 (log n) 2 nω 2 = 2n 2− C 2 3 log n 6 · T ω 2 ≤ 2n −(C2+γ) /3,2n 2 exp − 1 2 n n = 2n 2 exp − 1 2 nC 3 log n n 1/2 ω = 2n 2− C 3 n 1/2 2ω ≤ 2n −(C2+γ) /3, for someC 2 > 0 as long as C 2 3 log n 6 · T ω 2 > 2 + γ and C3 2 · n 1/2 ω > 2 + γ. Similarly, we have · n ω 2 · T > 2 + γ and C3 2 · n 1/2 ω > 2 + γ Thus, combine the above results, we have that with probability 1 − 2n −(C2+γ) , max i,j:i =j (Ā 2 /n) ij − (P 2 /n) ij ≤ 3 n = 3C 3 log n n 1/2 ω . P max i,j:i =j (Ā iiĀij − P ii P ij ) n > n ≤ 2n 2 exp − Following the same argument as [39], we have that for all i and anyĩ such that ξĩ ∈ S i (∆ n ), (P 2 /n) ik − (P 2 /n)ĩ k = | P i· , P k· − Pĩ · , P k· | /n ≤ P i· − Pĩ · 2 P k· 2 /n ≤ L∆ n , for all k = 1, . . . , n, where the last inequality follows from the piecewise Lipschitz condition of the graphon such that |Pĩ l − P il | = |f (ξĩ, ξ l ) − f (ξ i , ξ l )| ≤ L|ξĩ − ξ i | ≤ L∆ n for all l = 1, . . . , n, and from P k· 2 ≤ n 1/2 for all k. We now try to upper boundd(i, i ) for all i ∈ N i . We first boundd(i,ĩ) for allĩ with ξĩ ∈ S i (∆ n ) simultaneously. 
By above, we know that with probability 1 − 2n −(C2+γ) , we havẽ d(i,ĩ) = max k =i,ĩ (Ā 2 /n) ik − (Ā 2 /n)ĩ k ≤ max k =i,ĩ (P 2 /n) ik − (P 2 /n)ĩ k + 2 max i,j:i =j (Ā 2 /n) ij − (P 2 /n) ij ≤ L∆ n + 6C 3 log n n 1/2 ω , for all i and anyĩ such that ξĩ ∈ S i (∆ n ). By the above result and Lemma 9. We are now ready to complete the proof of the second claim of Lemma 9.3. With probability 1 − 2n −(C1+γ) − 2n −(C2+γ) , for n large enough such that min i |{i = i : ξ i ∈ S i (∆ n )}| ≥ B 1 n 1/2 log n ω ≥ 4 (by Lemma 9.2), we have that for all i and i ∈ N i , we can findĩ ∈ S i (∆ n ), i ∈ S i (∆ n ) such thatĩ, i,ĩ , i are different from each other and P i· − P i · 2 2 /n = (P 2 /n) ii − (P 2 /n) i i + (P 2 /n) i i − (P 2 /n) ii ≤ |(P 2 /n) ii − (P 2 /n) i i | + |(P 2 /n) i i − (P 2 /n) ii | ≤ |(P 2 /n) iĩ − (P 2 /n) i ĩ | + |(P 2 /n) i ĩ − (P 2 /n) iĩ | + 4L∆ n ≤ |(Ā 2 /n) iĩ − (Ā 2 /n) i ĩ | + |(Ā 2 /n) i ĩ − (Ā 2 /n) iĩ | + 4L∆ n + 12C 3 log n n 1/2 ω ≤ 2 max k =i,i |(Ā 2 /n) ik − (Ā 2 /n) i k | + 4L∆ n + 12C 3 log n n 1/2 ω = 2d(i, i ) + 4L∆ n + 12C 3 log n n 1/2 ω ≤ 6L∆ n + 24C 3 log n n 1/2 ω = (6LC 1 + 24C 3 ) log n n 1/2 ω . This completes the proof of Lemma 9.3. Based on Lemma 9.2 and 9.3, we are now ready to prove Theorem 3.2, which provides the error bound for MNBS. Proof of Theorem 3.2. To prove Theorem 3.2, it suffices to show that with high probability, the following holds for all i. 1 n j (P ij − P ij ) 2 ≤ C · log n n 1/2 ω . We first perform a bias-variance decomposition such that 2. Lemma 9.3: C 2 3 log n 6 · T ω 2 > 2 + γ and C3 2 · n 1/2 ω > 2 + γ and B 1 ≥ B 0 ; 3. C 2 4 (log n) 3 4 · n ω 2 · T 2 ω 2 > (1 + γ) and 3C4 log n 4 · n ω 2 > 1 + γ; C 2 5 log n 6 · T 2 ω 2 > 2 + γ and C5 2 · n 1/2 ω > 2 + γ; ω 2 T (log n) 2 ≤ 1. It is easy to see that, for any γ > 0 and B 0 > 0, we can always find B 1 , C 1 , C 3 , C 4 , C 5 such that all inequalities in (1)-(3) hold for all n large enough as long as ω ≤ min(n 1/2 , (T log n) 1/2 ). Take ω = min(n 1/2 , (T log n) 1/2 ), this completes the proof of Theorem 3.2. The key observation is that for both an h-flat point and an true change-point, the adjacency matrices {A (i) } t i=t−h+1 or {A (i) } t+h i=t+1 that are used in the estimation ofP t1,h orP t2,h are generated by the same probability matrix P and thus the result of Theorem 3.2 can be directly applied. By assumption we have (h log n) 1/2 < n 1/2 , thus ω = min(n 1/2 , (h log n) 1/2 ) = (h log n) 1/2 . Thus by Theorem 3.2, for any t that is an h-flat point, we have P (D(t, h) > ∆ D ) = P (d 2,∞ (P t1,h ,P t2,h ) 2 > ∆ D ) ≤P (d 2,∞ (P t1,h ,P t1,h ) 2 + d 2,∞ (P t2,h ,P t2,h ) 2 > ∆ D /2) ≤P (d 2,∞ (P t1,h ,P t1,h ) 2 > ∆ D /4) + P (d 2,∞ (P t2,h ,P t2,h ) 2 > ∆ D /4) ≤ 2n −γ , where the second to last inequality uses the fact thatP t1,h =P t2,h for an h-flat point, and the last inequality follows from Theorem 3.2 and the fact that ∆ D /(C(log n) 1/2 /(n 1/2 h 1/2 )) → ∞ for any C > 0. 
For any t that is a true change-point, we have P (D(t, h) > ∆ D ) = P (d 2,∞ (P t1,h ,P t2,h ) 2 > ∆ D ) ≥P (d 2,∞ (P t1,h ,P t2,h ) − d 2,∞ (P t1,h ,P t1,h ) − d 2,∞ (P t2,h ,P t2,h ) > ∆ D ) =P (d 2,∞ (P t1,h ,P t1,h ) + d 2,∞ (P t2,h ,P t2,h ) < d 2,∞ (P t1,h ,P t2,h ) − ∆ D ) ≥P (d 2,∞ (P t1,h ,P t1,h ) + d 2,∞ (P t2,h ,P t2,h ) < ∆ D ( ∆ * /∆ D − 1)) ≥1 − P (d 2,∞ (P t1,h ,P t1,h ) 2 > ∆ D ( ∆ * /∆ D − 1) 2 /4) − P (d 2,∞ (P t2,h ,P t2,h ) 2 > ∆ D ( ∆ * /∆ D − 1) 2 /4) ≥ 1 − 2n −γ , where the second inequality uses the fact that d 2,∞ (P t1,h ,P t2,h ) ≥ √ ∆ * for a true change-point, and the last inequality follows from Theorem 3.2 and the fact that ∆ D /(C(log n) 1/2 /(n 1/2 h 1/2 )) → ∞ for any C > 0. Let F h be the set of all flat points t and J be the set of all true change-points. Consider the event A τ = {D(τ, h) > ∆ D } for true change-points τ ∈ J and the event B t = {D(t, h) < ∆ D } for flat points t ∈ F h . Define the event ξ n = τ ∈J A τ t∈F h B t . By the above result, we have that P (ξ n ) = 1 − P (ξ c n ) ≥ 1 − P τ ∈J A c τ − P t∈F h B c t ≥ 1 − 2T n −γ → 1, as long as T n −γ → 0. We now prove that ξ n implies the event {Ĵ = J} ∩ {J ⊂:Ĵ ± h}. Under ξ n , no flat point will be selected at the thresholding steps. Thus, for any pointτ ∈ J , there is at least one change-point in its neighborhood {τ − h + 1, . . . ,τ + h − 1}. On the other hand, by assumption h < D * /2, thus, there exists at most one change-point in {τ − h + 1, . . . ,τ + h − 1}. Together, it implies that there is exactly one change-point in {τ − h + 1, . . . ,τ + h − 1} for eachτ ∈Ĵ . Meanwhile, under ξ n , for every true change-point τ ∈ J , we have D(τ, h) > ∆ D . Note that τ − h and τ + h are h-flat points since h < D * /2, thus max(D(τ + h, h), D(τ − h, h)) < ∆ D . Thus, for every true change-point τ, there exists a local maximizer, sayτ , which is in {τ −h+1, . . . , τ +h−1} with D(τ , h) ≥ D(τ, h) > ∆ D . Combining the above result, we have that P {Ĵ = J} ∩ {J ⊂:Ĵ ± h} → 1.
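For completeness, here is a minimal sketch of the MUSVT steps 1-3 described in Section 8 above. Since $\bar{A}$ is symmetric, its singular value decomposition is obtained from the eigendecomposition (the singular values are the absolute eigenvalues); the function name musvt and the default $\eta = 0.1$ are our choices. Plugging this estimator into the earlier screening-and-thresholding sketch in place of MNBS gives the USVT-based variant discussed at the end of Section 8.

```python
import numpy as np

def musvt(A_list, eta=0.1):
    """Modified USVT sketch: estimate P by thresholding the spectrum of the
    time-averaged adjacency matrix at (2 + eta) * sqrt(n / T), then clipping to [0, 1]."""
    T = len(A_list)
    A_bar = sum(A_list) / T
    n = A_bar.shape[0]
    evals, evecs = np.linalg.eigh(A_bar)                  # A_bar = sum_i lambda_i v_i v_i^T
    keep = np.abs(evals) >= (2.0 + eta) * np.sqrt(n / T)  # retain large singular values only
    A_tilde = (evecs[:, keep] * evals[keep]) @ evecs[:, keep].T
    return np.clip(A_tilde, 0.0, 1.0)                     # step 3: clip entries to [0, 1]
```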
9,660
1908.01823
2965547769
We propose a general approach for change-point detection in dynamic networks. The proposed method is model-free and covers a wide range of dynamic networks. The key idea behind our approach is to effectively utilize the network structure in designing change-point detection algorithms. This is done via an initial step of graphon estimation, where we propose a modified neighborhood smoothing (MNBS) algorithm for estimating the link probability matrices of a dynamic network. Based on the initial graphon estimation, we then develop a screening and thresholding algorithm for multiple change-point detection in dynamic networks. The convergence rate and consistency of the change-point detection procedure are derived, as are those of MNBS. When the number of nodes is large (e.g., exceeds the number of temporal points), our approach yields a faster convergence rate in detecting change-points compared with an algorithm that simply employs averaged information of the dynamic network across time. Numerical experiments demonstrate robust performance of the proposed algorithm for change-point detection under various types of dynamic networks, and superior performance over existing methods is observed. A real data example is provided to illustrate the effectiveness and practical impact of the procedure.
Another related area of research is anomaly detection in dynamic networks, where the task is to detect short abrupt deviation of the network behavior from its norm. This is not the focus of our paper and we refer the readers to @cite_1 for a comprehensive survey.
{ "abstract": [ "Anomaly detection is an important problem with multiple applications, and thus has been studied for decades in various research domains. In the past decade there has been a growing interest in anomaly detection in data represented as networks, or graphs, largely because of their robust expressiveness and their natural ability to represent complex relationships. Originally, techniques focused on anomaly detection in static graphs, which do not change and are capable of representing only a single snapshot of data. As real-world networks are constantly changing, there has been a shift in focus to dynamic graphs, which evolve over time." ], "cite_N": [ "@cite_1" ], "mid": [ "2093168265" ] }
Change-point detection in dynamic networks via graphon estimation
The last few decades have witnessed rapid advancement in models, computational algorithms and theories for the inference of networks. This is largely motivated by the increasing prevalence of network data in diverse fields of science, engineering and society, and the need to extract meaningful scientific information from these network data. In particular, the emerging field of statistical network analysis has spurred the development of many statistical models, such as the latent space model [28], the stochastic block model and its variants [12,13,2,30], and associated algorithms [36,23,17,1] for various inference tasks including link prediction, community detection and so on. However, the existing literature has been mostly focused on the analysis of one (and often large) network. While inference for a single network remains an important research area due to its abundant applications in social network analysis, computational biology and other fields, there is an emerging need to analyze collections of multiple network objects [11,14,22,4], with one notable example being temporal or dynamic networks. For example, it has become standard practice in many areas of neuroscience (e.g., neuro-imaging) to use networks to represent various notions of connectivity among regions of interest (ROIs) in the brain, observed in a temporal fashion while the subjects are engaged in some learning task. Analysis of such data demands the development of new network models and tools, and has led to a growing literature on the inference of dynamic networks; see, e.g., [32,26,31]. Our work focuses on change-point detection in dynamic networks, which is an important yet less studied aspect of learning dynamic networks. The key insight of our proposed approach is to effectively utilize the network structure for efficient change-point detection in dynamic networks. This is done by first performing graphon estimation (i.e., link probability matrix estimation) for the dynamic network, which then serves as the basis of a screening and thresholding procedure for change-point detection. For graphon estimation in dynamic networks, we propose a novel modified neighborhood smoothing (MNBS) algorithm, where a faster convergence rate is achieved via simultaneous utilization of the network structure and the repeated observations of the dynamic network across time. Most of the existing literature on change-point detection in dynamic networks [e.g., 25,18,35,8] relies on specific model assumptions and only provides computational algorithms without theoretical justification. In contrast, our method is nonparametric/model-free and thus can be applied to a wide range of dynamic networks. Moreover, we thoroughly study the consistency and convergence properties of our change-point detection procedure and provide its theoretical guarantee under a formal statistical framework. Numerical experiments on both synthetic and real data are conducted to further demonstrate the robust and superior performance of the proposed method. The paper is organized as follows. Section 2 discusses related work. Section 3 proposes an efficient graphon (link probability matrix) estimation method, MNBS, and Section 4 introduces our change-point detection procedure. Numerical studies on synthetic and real networks are carried out in Section 5. Our work concludes with a discussion. Technical proofs, additional numerical studies and a suggested additional graphon estimator can be found in the supplementary material.
Modified neighborhood smoothing for dynamic networks In this section, we propose a neighborhood smoothing based estimator for link probability matrix estimation given repeated observations of an undirected network. This later serves as the basis for the proposed algorithm of multiple change-point detection for dynamic networks in Section 4. The basic setting is as follows. Given a network with a link probability matrix P , assume one observes independent (symmetric) adjacency matrices A (t) (t = 1, . . . , T ) such that A (t) ij ∼ Bernoulli(P ij ) for i ≤ j, independently. Based on the repeated observations {A (t) } T t=1 , we want to estimate the link probability matrix P . Note that when T = 1, this reduces to the classical problem of link probability matrix estimation for a network (e.g., [6], [10] and [39]). In particular, the neighborhood smoothing (NBS) proposed by [39] is a computationally feasible algorithm which enjoys competitive error rate and is demonstrated to work well for real networks. Motivated by NBS, we propose a modified neighborhood smoothing (MNBS) algorithm, which incorporates the repeated observations of the network across time and thus further improves the estimation accuracy of NBS. LetĀ = T t=1 A (t) /T, we define the distance measure between node i and i as in [39] such that d 2 (i, i ) = max k =i,i | Ā i· −Ā i · ,Ā k· |, where A i· denotes the ith row of A and · · denotes the inner product of two vectors. Based on the distance metric, define the neighborhood of node i as N i = i = i :d(i, i ) ≤ q i (q) ,(1) where q i (q) denotes the qth quantile of the distance set d (i, i ) : i = i . Given neighborhood N i for each node i, we define the modified neighborhood smoothing (MNBS) estimator as P ij = i ∈NiĀ i j |N i | .(2) Note that q is a tuning parameter and affects the performance of MNBS via a bias-variance trade-off. In [39], where T = 1, the authors set q = C(log n/n) 1/2 for some constant C > 0. Thus, for each node i, the size of its neighborhood |N i | is roughly C(n log n) 1/2 . For MNBS, we set q = C log n/(n 1/2 ω), where ω = min(n 1/2 , (T log n) 1/2 ). When T = 1, MNBS reduces to NBS. For T > 1, we have log n/(n 1/2 ω) < (log n/n) 1/2 and thus MNBS estimates P by smoothing over a smaller neighborhood. From a bias-variance trade-off point of view, the intuition behind this modification is that we can shrink the size of N i to reduce the bias ofP ij introduced by neighborhood smoothing while the increased variance ofP ij due to the shrunken neighborhood can be compensated by the extra averaged information brought by {A (t) } T t=1 across time. We proceed with studying theoretical properties of MNBS. We assume the link probability matrix P is generated by a graphon f : [0, 1] 2 × N → [0, 1] such that f (x, y) = f (y, x) and P ij = f (ξ i , ξ j ), for i, j = 1, . . . , n, and ξ i i.i.d. ∼ Uniform[0, 1]. As in [39], we study properties of MNBS for a piecewise Lipschitz graphon family, where the behavior of the graphon function f (x, y) is regulated in the following sense. x 0 < · · · < x K = 1 satisfying min 0≤s≤K−1 (x s+1 − x s ) > δ, and (ii) both |f (u 1 , v) − f (u 2 , v)| ≤ L|u 1 − u 2 | and |f (u, v 1 ) − f (u, v 2 )| ≤ L|v 1 − v 2 | hold for all u, u 1 , u 2 ∈ [x s , x s+1 ), v, v 1 , v 2 ∈ [x t , x t+1 ) and 0 ≤ s, t ≤ K − 1. As is illustrated by [39], this graphon-based theoretical framework is a general model-free scheme that covers a wide range of exchangeable networks such as the commonly used Erdös-Rényi model and stochastic block model. 
See more detailed discussion about Definition 3.1 in [39]. For any P, Q ∈ R n×n , define d 2,∞ , the normalized 2, ∞ matrix norm, by d 2,∞ (P, Q) = n −1/2 P − Q 2,∞ = max i n −1/2 P i· − Q i· 2 . We have the following error rate bound for MNBS. The sample size T is implicitly taken as a function of n and all limits are taken over n → ∞. Theorem 3.2 (Consistency of MNBS). Assume L is a global constant and δ = δ(n, T ) depends on n, T satisfying lim n→∞ δ/(n −1/2 ω −1 log n) → ∞ where ω = ω(n, T ) = min(n 1/2 , (T log n) 1/2 ), then the estimatorP defined in (2), with neighborhood N i defined in (1) and q = B 0 log n/(n 1/2 ω) for any global constant B 0 > 0 satisfies max f ∈F δ;L P d 2,∞ (P , P ) 2 ≥ C log n n 1/2 ω ≤ n −γ , for any γ > 0, where C is a positive global constant depending on B 0 and γ. It is easy to see that the error rate in Theorem 3.2 also holds for the normalized Frobenius norm d F (P , P ) = n −1 P − P F . For T = 1, ω = (log n) 1/2 and MNBS recovers the error rate (log n/n) 1/2 of NBS in [39]. For n > T , which is the realistic case for repeated temporal observations of a large dynamic network, we can set ω = (T log n) 1/2 for MNBS and thus have max f ∈F δ;L P d 2,∞ (P , P ) 2 ≥ C log n nT 1/2 ≤ n −γ . In other words, the network structure among n nodes and the repeated observations along time dimension T both help achieve better estimation accuracy for MNBS. In contrast, it is easy to see that a simply averagedĀ across time T cannot achieve improved performance when n increases. Remark 3.3. Another popular estimator of P , which is more scalable for large networks, is the USVT (Universal Singular Value Thresholding) proposed by [6]. In Section 8 of the supplementary material, we propose a modified USVT for dynamic networks by carefully lowering the singular value thresholding level forĀ. However, the convergence rate of the modified USVT is shown to be slower than MNBS. Thus we do not pursue USVT-based change-point detection here. MNBS-based multiple change-point detection In this section, we propose an efficient multiple change-point detection procedure for dynamic networks, which is built upon MNBS. We assume the observed dynamic network {A (t) } T t=1 are generated by a sequence of probability matrices {P (t) } T t=1 with A (t) ij ∼ Bernoulli(P (t) ij ) for t = 1, . . . , T . We are interested in testing the existence and further estimating the locations of potential change-points where P (t) = P (t+1) . More specifically, we assume there exist J (J ≥ 0) unknown change-points τ 0 ≡ 0 < τ 1 < τ 2 < . . . < τ J < T ≡ τ J+1 such that P (t) = P j , for t = τ j−1 + 1, . . . , τ j , and j = 1, . . . , J + 1. In other words, we assume there exist J + 1 non-overlapping segments of (1, . . . , T ) where the dynamic network follows the same link probability matrix on each segment and P j is the link probability matrix of the jth segment satisfying P j = P j+1 . Denote J = {τ 1 < τ 2 < . . . < τ J } as the set of true change-points and define J = ∅ if J = 0. Note that the number of change-points J is allowed to grow with the sample size (n, T ). A screening and thresholding change point detection algorithm For efficient and scalable computation, we adapt a screening and thresholding algorithm that is commonly used in change-point detection for time series, see, e.g. [16], [24], [40] and [38]. The MNBS-based detection procedure works as follows. Screening: Set a screening window of size h T . For each t = h, . . . 
, T − h, we calculate a local window based statistic D(t, h) = d 2,∞ (P t1,h ,P t2,h ) 2 , whereP t1,h andP t2,h are the estimated link probability matrices based on observed adjacency matrices {A (i) } t i=t−h+1 and {A (i) } t+h i=t+1 respectively by MNBS. The local window size h T is a tuning parameter. In the following, we only consider the case where (h log n) 1/2 ≤ n 1/2 , which is the most likely scenario for real data applications and thus is more interesting. Therefore, for MNBS, we can set ω = min(n 1/2 , (h log n) 1/2 ) = (h log n) 1/2 and q = B 0 (log n) 1/2 /(n 1/2 h 1/2 ). The result for (h log n) 1/2 > n 1/2 can be derived accordingly. Intuitively, D(t, h) measures the difference of the link probability matrices within a small neighborhood of size h before and after t, where a large D(t, h) signals a high chance of being a change-point. We call a time point x an h-local maximizer of the function D(t, h) if D(x, h) ≥ D(t, h), for all t = x − h + 1, . . . , x + h − 1. Thresholding: Let LM denote the set of all h-local maximizers of the function D(t, h), we estimate the change-points by applying a thresholding rule to LM such that J = {t|t ∈ LM and D(t, h) > ∆ D }, whereĴ is the set of estimated change-points,Ĵ = Card(Ĵ ) and ∆ D is the threshold taking the form ∆ D = ∆ D (h, n) = D 0 (log n) 1/2+δ0 n 1/2 h 1/2 , for some constants D 0 > 0 and δ 0 > 0. Note that ∆ D dominates the asymptotic order of the MNBS estimation error C(log n) 1/2 /(n 1/2 h 1/2 ) ofP t1,h andP t2,h quantified by Theorem 3.2. The proposed algorithm is scalable and can readily handle change-point detection in large-scale dynamic networks as MNBS can be easily parallelized over n nodes and the screening procedure is parallelizable over time t = h, . . . , T − h. Theoretical guarantee of the change-point detection procedure We first define several key quantities that are used for studying theoretical properties of the MNBSbased change-point detection procedure. Define ∆ j = d 2,∞ (P j , P j+1 ) 2 for j = 1, . . . , J and ∆ * = min 1≤j≤J ∆ j , which is the minimum signal level in terms of d 2,∞ norm. Also, define D * = min 1≤j≤J+1 (τ j − τ j−1 ), which is the minimum segment length. We assume for each segment j = 1, . . . , J + 1, its link probability matrix P j is generated by a piecewise Lipschitz graphon f j ∈ F δ;L as in Definition 3.1, where common constants (δ, L) are shared across segments. Note that J = J (n, T ), J = J(n, T ), ∆ * = ∆ * (n, T ), D * = D * (n, T ) and δ = δ(n, T ) are functions of (n, T ) implicitly. We have the following consistency result. . Assume there exists some γ > 0 such that T /n γ → 0. Assume L is a global constant and assume δ = δ(n, T ), h = h(n, T ) and D * = D * (n, T ) depend on n, T satisfying h < D * /2 and lim n→∞ δ/(n −1 h −1 log n) 1/2 → ∞. If assume further that the minimum signal level ∆ * = ∆ * (n, T ) exceeds the detection threshold ∆ D = D 0 (log n) 1/2+δ0 /(n 1/2 h 1/2 ), i.e. lim n→∞ ∆ * /∆ D > 1, then the MNBS-based change-point detection procedure with q = B 0 (log n) 1/2 /(n 1/2 h 1/2 ) satisfies lim n→∞ P {Ĵ = J} ∩ {J ⊂:Ĵ ± h} = 1, for any constants B 0 , D 0 , δ 0 > 0, where J ⊂:Ĵ ± h means τ j ∈ {τ j − h + 1, . . . ,τ j + h − 1} for j = 1, . . . , J. In particular, Theorem 4.1 gives a sure coverage property ofĴ in the sense that the true change-point set J is asymptotically covered byĴ ± h such that max j=1,...,J |τ j − τ j | < h. 
If h/T → 0, Theorem 4.1 impliesĴ = J and max j=1,...,J |τ j /T − τ j /T | < h/T → 0 in probability, which further implies consistency of the relative locations of the estimated change-pointsĴ . A remarkable phenomenon occurs if the minimum signal level ∆ * is strong enough such that ∆ * > (log n) 1/2+δ0 /n 1/2 . Under such situation, we can set h = 1 and by the sure coverage property in Theorem 4.1, the proposed algorithm recovers the exact location of the true change-points J without any error. This is in sharp contrast to the classical result for change-point detection under time series settings [37], where the optimal error rate for estimated change-point location is shown to be O p (1). This unique property is due to the fact that MNBS provides accurate estimation of the link probability matrix by utilizing the network structure within each network via smoothing. Remark 4.2 (Choice of tuning parameters). There are four tuning parameters of the MNBS-based detection algorithm: local window size h, neighborhood size B 0 and threshold size (D 0 , δ 0 ). For the window size h, a smaller h gives a better convergence rate of change-point estimation and is more likely to satisfy the constraint that h < D * /2. On the other hand, a smaller h puts a higher requirement on the detectable signal level ∆ * since we require ∆ * /∆ D > 1 with a smaller h implying a larger threshold ∆ D = D 0 (log n) 1/2+δ0 /(n 1/2 h 1/2 ). The only essential requirement on h is that h < D * /2, i.e., the local window should not cover two true change-points at the same time. In practice, as long as a lower bound of D * is known, h can be specified accordingly. For most applications, we recommend setting h = √ T . For the choice of B 0 and (D 0 , δ 0 ), note that Theorem 4.1 holds for any B 0 , D 0 , δ 0 > 0. Thus the choice of B 0 and (D 0 , δ 0 ) is more of a practical matter and specific recommendations are provided in Section 7.2 of the supplementary material, where MNBS is found to give robust performance across a wide range of tuning parameters. Remark 4.3 (Separation measure of signals). To our best knowledge, all existing literature that study change-points of dynamic networks use Frobenius norm as the separation measure between two link probability matrices. We instead use d 2,∞ norm, which in general gives much weaker condition than the one using Frobenius norm. See examples in Section 5.1. Numerical studies In this section, we conduct numerical experiments to examine the performance of the MNBSbased change-point detection algorithm for dynamic networks. For comparison, the graph-based nonparametric testing procedure in [7] is also implemented (via R-package gSeg provided by [7]) with type-I error α = 0.05. We refer to the two detection algorithms as MNBS and CZ. To operationalize MNBS, we need to specify the neighborhood q = B 0 (log n) 1/2 /(n 1/2 h 1/2 ) and the threshold ∆ D = D 0 (log n) 1/2+δ0 /(n 1/2 h 1/2 ). In total, there are four tuning parameters, h for the local window size, B 0 for the neighborhood size, and D 0 and δ 0 for the threshold size. In Section 7.2 of the supplementary material, we conduct extensive numerical experiments and provide detailed recommendations for calibration of the tuning parameters. In short, MNBS provides robust and stable performance across a wide range of tuning parameters. We refer readers to Section 7.2 of the supplementary material for detailed study of the tuning parameters. In the following, we recommend setting h = √ T , B 0 = 3, δ 0 = 0.1 and D 0 = 0.25. 
Performance on synthetic networks In this section, we compare the performance of MNBS and CZ under various synthetic dynamic networks that contain single or multiple change-points. We first define seven different stochastic block models (SBM-I to SBM-VII), which we then use to build various dynamic networks that exhibit different types of change behavior. Denote K B as the number of blocks in an SBM, denote M (i) as the membership of the ith node and denote Λ as the connection probability matrix between different blocks. Define M 1 (i) = I(1 ≤ i ≤ n/3 ) + 2I( n/3 + 1 ≤ i ≤ 2 n/3 ) + 3I(2 n/3 + 1 ≤ i ≤ n), where I(·) denotes the indicator function and x denotes the integer part of x. Define Λ1 =   0.6 0.6 − ∆nT 0.3 0.6 − ∆nT 0.6 0.3 0.3 0.3 0.6   , Λ2 =   0.6 + ∆nT 0.6 0.3 0.6 0.6 + ∆nT 0.3 0.3 0.3 0.6   , Λ3 = 0.6 0.3 0.3 0.6 , Λ4 =   0.6 + ∆nT 0.6 − ∆nT 0.3 0.6 − ∆nT 0.6 + ∆nT 0.3 0.3 0.3 0.6   , Λ5 = 0.6 0.6 − ∆nT 0.6 − ∆nT 0.6 . The seven SBMs are defined as: [SBM-I] K B = 2, M (i) = I(1 ≤ i ≤ 2 n/3 ) + 2I(2 n/3 + 1 ≤ i ≤ n), Λ = Λ 3 . [SBM-II] K B = 2, M (i) = I(1 ≤ i ≤ 2 n(1 − ∆ nT )/3 ) + 2I(2 n(1 − ∆ nT )/3 + 1 ≤ i ≤ n), Λ = Λ 3 . [SBM-III] K B = 3, M (i) = M 1 (i), Λ = Λ 1 (∆ nT ). [SBM-IV] K B = 3, M (i) = M 1 (i), Λ = Λ 2 (∆ nT ). [SBM-V] K B = 3, M (i) = M 1 (i), Λ = Λ 4 (∆ nT ). [SBM-VI] K B = 2, M (i) = I(1 ≤ i ≤ 2 n/3 ) + 2I(2 n/3 + 1 ≤ i ≤ n), Λ = Λ 5 (∆ nT ). [SBM-VII] K B = 2, M (i) = I(1 ≤ i ≤ 2 n/3 − 1) + 2I(2 n/3 ≤ i ≤ n), Λ = Λ 5 (∆ nT ). Dynamic networks with change-points: Based on SBM-I to SBM-VII, we design five dynamic stochastic block models (DSBM) with single or multiple change-points. The signal level ∆ * of each DSBM/MDSBM is controlled by ∆ nT and decreases as sample size (T, n) grows. The detailed signal level is summarized in Tables 4 and 5 of the supplementary material. Note that the signal level measured by (normalized) Frobenius norm d F can be at a considerably smaller order than the one measured by (normalized) d 2,∞ norm, indicating d 2,∞ norm is more sensitive to changes. This phenomenon is especially significant for DSBM-III, where d 2 2,∞ (P 1 , P 2 ) = 1/(n 1/3 T 1/4 ) and d 2 F (P 1 , P 2 ) = 2/(n 4/3 T 1/4 ), since only one node switches membership after the change-point, making the change very challenging to detect in terms of Frobenius norm. Simulation result: We vary n = 100, 500, 1000 and T = 100, 500. For each combination of sample size (T, n) and DSBM/MDSBM, we conduct the simulation 100 times. Note that for multiple change-point scenarios, we conduct change-point analysis on MDSBM-I when T = 100 and perform the analysis on MSDBM-II when T = 500. To assess the accuracy of change-point estimation, we use the Boysen distance as suggested in [5] and [40]. Specifically, denote J nT as the estimated change-point set and J nT as the true change-point set, we calculate the distance between J nT and J nT via ξ( J nT ||J nT ) = sup b∈J nT inf a∈ J nT |a − b| and ξ(J nT || J nT ) = sup b∈ J nT inf a∈J nT |a − b|, which quantify the under-segmentation error and over-segmentation error of the estimated change-point set J nT , respectively. When J nT = ∅ and J nT = ∅, we define ξ( J nT ||J nT ) = max τ ∈J nT τ and ξ(J nT || J nT ) = -. The performance of MNBS and CZ are summarized in Table 6, where we report the average number of estimated change-pointsĴ and the average Boysen distance ξ 1 = ξ( J nT ||J nT ) for under-segmentation error and ξ 2 = ξ(J nT || J nT ) for over-segmentation error over 100 runs. 
Real data analysis In this section, we apply MNBS and CZ to perform change-point detection for the MIT proximity network data. The data is collected through an experiment conducted by the MIT Media Laboratory during the 2004-2005 academic year [9], where 90 MIT students and staff were monitored by means of their smart phone. The Bluetooth data gives a measure of the proximity between two subjects and can be used as to construct a link between them. Based on the recorded time of the Bluetooth scan, we construct a daily-frequency dynamic network among 90 subjects by grouping the links per day. The network extracted based on the Bluetooth scan is relatively dense (see Figure 2 in the supplementary material). There are in total 348 days from 07/19/2004 to 07/14/2005. The recommended h is h = √ 348 = 18. For better interpretation, we set h = 14, which corresponds to 2 weeks. CZ detects 18 change-points while MNBS gives 10 change-points. The detailed result is reported in Table 2. The two algorithms give similar results for change-point locations. Notably CZ labels more change-points around the beginning and ending of the time period, which may be suspected as false positives. For robustness check, we rerun the analysis for MNBS and CZ with h = 7, which corresponds to 1 week. CZ detects 28 change-points while MNBS detects 11 change-points, which further indicates the robustness of MNBS (This result is reported in Section 7.4 of the supplementary material). In Figure 2 (top) of the supplementary material, we plot the sequence of scan statistics D(t, h) generated by MNBS, along with its h-local maximizers LM and estimated change-points J . Figure 2(bottom) plots the time series of total links of the dynamic network for illustration purposes, where MNBS is seen to provide an approximately piecewise constant segmentation for the series. Conclusion We propose a model-free and scalable multiple change-point detection procedure for dynamic networks by effectively utilizing the network structure for inference. Moreover, the proposed approach is proven to be consistent and delivers robust performance across various synthetic and real data settings. One can leverage the insights gained from our work for other learning tasks such as performing hypothesis tests on populations of networks. One potential weakness of MNBS-based change-point detection is that it is currently not adaptive to the sparsity of the network, as graphon estimation by MNBS is not adaptive to sparsity. We expect to be able to build the sparsity parameter ρ n into our procedure by assuming the graphon function f (x, y) = ρ n f 0 (x, y) with piecewise Lipschitz condition on f 0 . This potentially allows to include ρ n into the error bound of MNBS (with Frobenius norm normalized by ρ n ) and subsequently to adjust detection threshold (thus minimum detectable signal strength) depending on ρ n . Additional SBMs and graphons In addition to the seven SBMs (SBM-I to SBM-VII) defined in Section 5.1 of the main text, we further define four more SBMs (SBM-VIII to SBM-XI) and three graphons (Graphon-I to Graphon-III), which are later used for numerical study of tuning parameter calibration and for specifying additional dynamic networks for change-point analysis. Again, denote K B as the number of blocks/communities in an SBM, denote M (i) as the membership of the ith node and denote Λ as the connection probability matrix between different blocks. The three graphons are borrowed from [39]. 
[SBM-VIII] $K_B = 2$, $M(i) = I(1 \le i \le \lfloor n^{3/4}\rfloor) + 2I(\lfloor n^{3/4}\rfloor + 1 \le i \le n)$, $\Lambda = \begin{pmatrix} 0.6 & 0.3\\ 0.3 & 0.6\end{pmatrix}$.
[SBM-IX] $K_B = 2$, $M(i) = I(1 \le i \le \lfloor n^{3/4}\rfloor) + 2I(\lfloor n^{3/4}\rfloor + 1 \le i \le n)$, $\Lambda = \begin{pmatrix} 0.6-\Delta_{nT} & 0.3\\ 0.3 & 0.6\end{pmatrix}$.
[SBM-X] $K_B = 2$, $M(i) = I(1 \le i \le \lfloor n/2\rfloor) + 2I(\lfloor n/2\rfloor + 1 \le i \le n)$, $\Lambda = \begin{pmatrix} 0.6 & 0.6-\Delta_{nT}\\ 0.6-\Delta_{nT} & 0.6\end{pmatrix}$.
[SBM-XI] $K_B = 2$, $M(i) = I(i \text{ is odd}) + 2I(i \text{ is even})$, $\Lambda = \begin{pmatrix} 0.6 & 0.6-\Delta_{nT}\\ 0.6-\Delta_{nT} & 0.6\end{pmatrix}$.
[Graphon-I] $f(u, v) = k/(K_B+1)$ if $u, v \in ((k-1)/K_B,\, k/K_B)$, and $f(u, v) = 0.3/(K_B+1)$ otherwise; $K_B = \lfloor\log n\rfloor$.
[Graphon-II] $f(u, v) = \sin\{5\pi(u+v-1)+1\}/2 + 0.5$.
[Graphon-III] $f(u, v) = \{(u^2+v^2)/3\}\cos\{1/(u^2+v^2)\} + 0.15$.
Calibration of tuning parameters
In this section, we discuss and recommend choices of the tuning parameters for MNBS. To operationalize the MNBS-based detection procedure, we need to specify the neighborhood size $q = B_0(\log n)^{1/2}/(n^{1/2}h^{1/2})$ and the threshold $\Delta_D = \Delta_D(h, n) = D_0(\log n)^{1/2+\delta_0}/(n^{1/2}h^{1/2})$. In total, there are four tuning parameters: $B_0$ for the neighborhood size, $h$ for the local window size, and $D_0$ and $\delta_0$ for the threshold size. By Theorem 4.1, to achieve consistent detection of the true change-points, the minimum signal level $\Delta^*$ needs to be larger than the threshold $\Delta_D$. Thus, in terms of the minimum detectable signal, we prefer a larger $h$ and a smaller $\delta_0$, so that asymptotically we can achieve a larger detectable region. On the other hand, to achieve a tighter confidence region for the estimated change-point locations we prefer a smaller $h$, and we prefer a larger $\delta_0$ since it helps reduce false positives under small sample sizes. For finite samples, we recommend setting $\delta_0 = 0.1$ and $h = \sqrt T$, which makes the weakest signal detectable by MNBS of order $O\big((\log n)^{0.6}/(n^{1/2}T^{1/4})\big)$. For the neighborhood size $B_0$, [39] demonstrates that the performance of the neighborhood-based estimation is robust to the choice of $B_0$ in the range $[e^{-1}, e^2]$. Following [39], we recommend setting $B_0 = 1, 2$ or $3$. Note that the number of neighbors in MNBS is $B_0(n\log n/h)^{1/2}$. To control the variance of MNBS, we suggest choosing a $B_0$ such that $B_0(n\log n/h)^{1/2} > 10$. For all the following simulations we set $B_0 = 3$; $B_0 = 1, 2$ give similar numerical performance.
To study the sensitivity of $D_0$ with respect to false positives under finite samples, we simulate dynamic networks $\{A^{(t)}\}_{t=1}^T$ with no change-points from SBM-III, VIII, XI and Graphons I-III. We vary $D_0$ (and thus the threshold $\Delta_D$) and examine the performance of MNBS. We vary the sample size over $n = 100, 500, 1000$ and $T = 100, 500$. For each combination of the sample size $(T, n)$ and the network model, we conduct the simulation 100 times. Figure 1 reports the curves of the average $|\hat J_{nT} - J_{nT}|$, the difference between the estimated and the true number of change-points, versus $D_0$ under the six different network models. Note that $J_{nT} \equiv 0$, thus $|\hat J_{nT} - J_{nT}| = \hat J_{nT}$ and the discrepancy represents the severity of false positive detection. As can be seen clearly, under the various models and sample sizes, the region $D_0 \ge 0.25$ controls the false positives reasonably well for MNBS. Thus, in practice we recommend setting $D_0 = 0.25$ for the largest power. Note that, out of the four stochastic block models (SBM-III, VIII, XI and Graphon-I), the most challenging case is SBM-VIII, which may be due to its imbalanced block sizes. The detailed results are reported in Table 3. As can be seen, MNBS performs reasonably well and improves as $n$ increases, with some false positives when $T = 500$, $n = 100$, i.e., the large-$T$, small-$n$ case.
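To make the recommended calibration concrete, the following small helper (our own, not code from the paper) computes the window size, neighborhood quantile, implied neighbor count and detection threshold from the formulas above for a given $(n, T)$.

```python
import math

def mnbs_tuning(n, T, B0=3, D0=0.25, delta0=0.1):
    """Recommended MNBS tuning parameters implied by the calibration discussed above."""
    h = int(math.sqrt(T))                                            # local window size h ~ sqrt(T)
    q = B0 * math.sqrt(math.log(n)) / math.sqrt(n * h)               # neighborhood quantile q
    n_neighbours = B0 * math.sqrt(n * math.log(n) / h)               # implied number of neighbors
    threshold = D0 * math.log(n) ** (0.5 + delta0) / math.sqrt(n * h)  # detection threshold Delta_D
    return {"h": h, "q": q, "n_neighbours": n_neighbours, "threshold": threshold}

for n, T in [(100, 100), (500, 100), (1000, 500)]:
    params = mnbs_tuning(n, T)
    # Sanity check from the text: B0 * sqrt(n log n / h) should exceed 10
    # so that the variance of MNBS stays under control.
    print(n, T, params, params["n_neighbours"] > 10)
```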
The false positives in the large-$T$, small-$n$ regime are due to the relatively large variance of MNBS incurred by the small neighborhood size. As for CZ, the empirical type-I error is roughly controlled at the target level 0.05 in most cases, while experiencing some inflated levels under $T = 500$, $n = 100$. The numerical experiments demonstrate that MNBS provides robust and stable performance across a wide range of tuning parameters. To summarize, in practice we recommend setting $h = \sqrt T$, $\delta_0 = 0.1$, $B_0 \in \{1, 2, 3\}$ and $D_0 \ge 0.25$.
Additional results on synthetic networks
In this section, we provide additional numerical experiments comparing the performance of MNBS and CZ. Specifically, we design three additional dynamic stochastic block models (DSBM-IV to DSBM-VI) and conduct change-point detection analysis.
Signal levels: The signal levels $\Delta^*$ of DSBM-I to DSBM-VI are summarized in Table 4. The signal levels of MDSBM-I and MDSBM-II are summarized in Table 5. Again, we define the normalized $d_{2,\infty}$ norm as $d_{2,\infty}(P, Q) = n^{-1/2}\|P-Q\|_{2,\infty} = \max_i n^{-1/2}\|P_{i\cdot} - Q_{i\cdot}\|_2$ and the normalized Frobenius norm as $d_F(P, Q) = n^{-1}\|P-Q\|_F$. Note that, in general, the signal level measured by the Frobenius norm $d_F$ is of considerably smaller order than the one measured by the $d_{2,\infty}$ norm, indicating that the $d_{2,\infty}$ norm is more sensitive to changes. This phenomenon is especially significant for DSBM-III, where only one node switches membership after the change-point, making the change very challenging to detect in terms of the Frobenius norm.
[Table 5: signal levels of $P_i - P_{i+1}$, $i = 1, \ldots, J$, measured in $d_{2,\infty}^2$ and $d_F^2$ at the change-points $\tau_1$-$\tau_4$ of MDSBM-I and MDSBM-II.]
Simulation setting and result: We set the sample size as $n = 100, 500, 1000$ and $T = 100, 500$. For each combination of sample size $(T, n)$ and DSBM, we conduct the simulation 100 times. The performance of MNBS and CZ for DSBM-IV to DSBM-VI is reported in Table 6, where we report the average number of estimated change-points $\hat J$ and the average Boysen distances $\xi_1 = \xi(\hat{\mathcal J}_{nT}\,\|\,\mathcal J_{nT})$ for the under-segmentation error and $\xi_2 = \xi(\mathcal J_{nT}\,\|\,\hat{\mathcal J}_{nT})$ for the over-segmentation error. In general, both MNBS and CZ provide satisfactory performance for DSBM-IV to DSBM-VI, while MNBS offers superior performance, with a more accurate estimated number of change-points $\hat J$ and smaller Boysen distances $\xi_1, \xi_2$ for both the under- and over-segmentation errors.
[Table 6: average number of estimated change-points $\hat J$ and Boysen distances $\xi_1$, $\xi_2$ by MNBS and CZ for DSBM-IV to DSBM-VI under single change-point scenarios; columns give $\hat J$, $\xi_1$, $\xi_2$ per DSBM and rows correspond to $T = 100, 500$.]
MUSVT
In this section, we describe another estimator of the link probability matrix $P$ based on repeated observations of a dynamic network over time, via singular value thresholding. It is a modification of the universal singular value thresholding (USVT) procedure proposed by [6]. More specifically, as in the main text, assume $A^{(t)}$ $(t = 1, \ldots, T)$ are such that $A^{(t)}_{ij} \sim \mathrm{Bernoulli}(P_{ij})$ for $i \le j$, independently. Let $\bar A = \sum_{t=1}^T A^{(t)}/T$. In applying MUSVT to estimate $P$, the major steps can be summarized as follows:
1. Let $\bar A = \sum_{i=1}^n s_i u_i u_i^T$ be the singular value decomposition of the average adjacency matrix $\bar A$.
2. Let $S = \{i : s_i \ge (2+\eta)\sqrt{n}/\sqrt{T}\}$, where $\eta \in (0, 1)$ is some small positive number, and let $\hat{\bar A} = \sum_{i\in S} s_i u_i u_i^T$.
3. Let $\hat P = (\hat P_{ij})$, where
$$\hat P_{ij} := \begin{cases}\hat{\bar A}_{ij}, & \text{if } 0 \le \hat{\bar A}_{ij} \le 1,\\ 1, & \text{if } \hat{\bar A}_{ij} > 1,\\ 0, & \text{if } \hat{\bar A}_{ij} < 0.\end{cases}$$
$\hat P$ serves as the final estimate of $P$. The key distinction between our estimate and the one in [6] is that we utilize $\bar A$, which allows us to lower the threshold level from an order of $\sqrt n$ to $\sqrt{n/T}$. Theorem 8.1 quantifies the rate of $\hat P$ in approximating $P$.
Theorem 8.1. Assume $P$ arises from a graphon $f$ that is piecewise Lipschitz as defined in Definition 3.1. Then the following holds:
$$P\Big(d_F(\hat P, P)^2 \ge C(f, n, \delta)\,\frac{1}{n^{1/3}T^{1/4}}\Big) \le \epsilon(n, T),$$
and $\epsilon(n, T) \to 0$ as $n, T \to \infty$, under the condition that $T \le n^{1-a}$ for some constant $a > 0$, where $d_F(\cdot, \cdot)$ stands for the normalized (by $1/n$) Frobenius distance and $C(f, n, \delta)$ is a constant depending on $f$, $n$ and $\delta$.
Proof. The key ingredients of the proof are to bound the spectral norm between $\bar A$ and $P$ as well as the nuclear norm of $P$. Specifically, by Lemma 3.5 in [6], one has
$$\|\hat P - P\|_F \le K(\delta)\,\big(\|\bar A - P\|\,\|P\|_*\big)^{1/2}, \qquad (4)$$
and by Theorem 3.4 in [6], one has, under the condition that $T < n^{1-a}$ for some $a > 0$,
$$P\big(\|\bar A - P\| \ge (2+\eta)\sqrt{n/T}\big) \le C_1\exp(-C_2\, n/T), \qquad (5)$$
where $K(\delta) = (4+2\delta)\sqrt{2/\delta} + \sqrt{2+\delta}$, $\|\cdot\|_F$ is the Frobenius norm, $\|\cdot\|$ stands for the spectral norm and $\|\cdot\|_*$ is the nuclear norm. The bound on $\|P\|_*$ is exactly the same as that in Theorem 2.7 in [6]. Only the term $\|\bar A - P\|$ changes, which improves the rate by a factor of $1/T^{1/4}$, resulting in the new rate of the theorem.
The MUSVT procedure proposed above can be used as the initial graphon estimate in designing our change-point detection algorithm. We can prove consistency of the change-point estimation and obtain results similar to Theorem 4.1 for MNBS, only requiring a slightly higher threshold level and stronger conditions on the minimal true signal strength. The difference in the rates of the two quantities is driven by the quality of the initial graphon estimates. Although the rates in Theorem 8.1 and the resulting requirements for consistency in change-point detection are not as good as those of MNBS, MUSVT enjoys some computational advantages when the number of nodes $n$ is large, and thus may still serve as a practical alternative.
9 Supplementary material: proof of theorems
Proposition 9.1 (Bernstein inequality). Let $X_1, X_2, \ldots, X_n$ be independent zero-mean random variables. Suppose that $|X_i| \le M$ a.s. for all $i$. Then for all positive $t$, we have
$$P\Big(\sum_{i=1}^n X_i > t\Big) \le \exp\Big(-\frac{\tfrac12 t^2}{\sum_{i=1}^n E(X_i^2) + \tfrac13 Mt}\Big).$$
By Proposition 9.1, for a sequence of independent Bernoulli random variables with $X_i \sim \mathrm{Bernoulli}(p_i)$, we have
$$P\Big(\Big|\sum_{i=1}^n (X_i - p_i)\Big| > t\Big) \le 2\exp\Big(-\frac{\tfrac12 t^2}{\sum_{i=1}^n p_i(1-p_i) + \tfrac13 t}\Big).$$
In the following, we prove the theoretical properties of MNBS by first giving two lemmas, which extend the results of Lemmas 1 and 2 in [39] to the case where $T \ge 1$ repeated observations of a network are available. Denote $I_k = [x_{k-1}, x_k)$ for $1 \le k \le K-1$ and $I_K = [x_{K-1}, 1]$ for the intervals in Definition 3.1, and denote $\delta = \min_{1\le k\le K} |I_k|$. For any $\xi \in [0, 1]$, let $I(\xi)$ denote the $I_k$ that contains $\xi$. Let $S_i(\Delta) = [\xi_i - \Delta, \xi_i + \Delta] \cap I(\xi_i)$ denote the neighborhood of $\xi_i$ in which $f(x, y)$ is Lipschitz in $x \in S_i(\Delta)$ for any fixed $y$.
Lemma 9.2 (Neighborhood size). For any global constants $C_1 > B_1 > 0$, define $\Delta_n = C_1\log n/(n^{1/2}\omega)$. If $(n^{1/2}/\omega)\cdot (C_1-B_1)^2/(7C_1-B_1) > \gamma + 1$, there exists $\tilde C_1 > 0$ such that, for $n$ large enough so that $\Delta_n < \min_k |I_k|/2$, we have
$$P\Big(\min_i \frac{|\{i' \neq i : \xi_{i'} \in S_i(\Delta_n)\}|}{n-1} \ge B_1\,\frac{\log n}{n^{1/2}\omega}\Big) \ge 1 - 2n^{-(\tilde C_1+\gamma)}.$$
Proof of Lemma 9.2.
For any i, by the definition of S i (∆ n ), we know that ∆ n ≤ |S i (∆ n )| ≤ 2∆ n . By Bernstein inequality, we have P |{i = i : ξ i ∈ S i (∆ n )}| n − 1 − |S i (∆ n )| > n ≤ 2 exp − 1 2 (n − 1) 2 2 n (n − 1)2∆ n + 1 3 (n − 1) n ≤ 2 exp − 1 3 n 2 n 2∆ n + 1 3 n . Take a union bound over all i's gives P max i |{i = i : ξ i ∈ S i (∆ n )}| n − 1 − |S i (∆ n )| > n ≤ 2n exp − 1 3 n 2 n 2∆ n + 1 3 n . Let ∆ n = C 1 log n n 1/2 ω and n = C 2 log n n 1/2 ω with C 2 = C 1 − B 1 > 0, we have P max i |{i = i : ξ i ∈ S i (∆ n )}| n − 1 − |S i (∆ n )| > n ≤2n exp − 1 3 n 2 n 2∆ n + 1 3 n ≤ 2n exp − 1 3 nC 2 2 log n n 1/2 ω 2C 1 + 1 3 C 2 =2n 1− n 1/2 ω C 2 2 6C 1 +C 2 ≤ 2n −(C1+γ) , for someC 1 > 0 as long as n 1/2 ω · C 2 2 6C1+C2 = n 1/2 ω · (C1−B1) 2 7C1−B1 > 1 + γ. Thus, with probability 1 − 2n −(C1+γ) , we have min i |{i = i : ξ i ∈ S i (∆ n )}| n − 1 ≥ min i S i (∆ n ) − n ≥ ∆ n − n = (C 1 − C 2 ) log n n 1/2 ω = B 1 log n n 1/2 ω . This completes the proof of Lemma 9.2. · T ω 2 > 2 + γ and C3 2 · n 1/2 ω > 2 + γ hold, there existsC 2 > 0 such that if n is large enough so that (i) all conditions on n in Lemma 9.2 hold; (ii) B 1 n 1/2 log n ω ≥ 4, then the neighborhood N i has the following properties: 1. |N i | ≥ B 0 n 1/2 log n ω . 2. With probability 1 − 2n −(C1+γ) − 2n −(C2+γ) , for all i and i ∈ N i , we have P i· − P i · 2 2 /n ≤ (6LC 1 + 24C 3 ) log n n 1/2 ω . Proof of Lemma 9.3. The first claim follows immediately from the definition of quantile and q, since |N i | ≥ n · q = nB 0 log n n 1/2 ω = B 0 n 1/2 log n ω . To prove the second claim, we first give a concentration result. For any i, j such that i = j, we have (Ā 2 /n) ij − (P 2 /n) ij = k (Ā ikĀkj − P ik P kj ) /n ≤ k =i,j (Ā ikĀkj − P ik P kj ) n − 2 · n − 2 n + (Ā iiĀij − P ii P ij ) n + (Ā ijĀjj − P ij P jj ) n . We can easily show that Var(Ā ikĀkj ) = P 2 ik P kj (1 − P kj ) + P 2 kj P ik (1 − P ik ) T + P ik (1 − P ik )P kj (1 − P jk ) T 2 ≤ 1 T . Thus, by the independence amongĀ ikĀkj and Bernstein inequality, we have P   k =i,j (Ā ikĀkj − P ik P kj ) n − 2 ≥ n   ≤ 2 exp − 1 2 (n − 2) 2 2 n (n − 2) 1 T + 1 3 (n − 2) n ≤ 2 exp − 1 3 n 2 n 1 T + 1 3 n . Take a union bound over all i = j, we have P   max i,j:i =j k =i,j (Ā ikĀkj − P ik P kj ) n − 2 ≥ n   ≤ 2n 2 exp − 1 3 n 2 n 1 T + 1 3 n (6) ≤2n 2 max exp − 1 6 nT 2 n , exp − 1 2 n n . Let n = C 3 log n n 1/2 ω , we have 2n 2 exp − 1 6 nT 2 n = 2n 2 exp − 1 6 nT C 2 3 (log n) 2 nω 2 = 2n 2− C 2 3 log n 6 · T ω 2 ≤ 2n −(C2+γ) /3,2n 2 exp − 1 2 n n = 2n 2 exp − 1 2 nC 3 log n n 1/2 ω = 2n 2− C 3 n 1/2 2ω ≤ 2n −(C2+γ) /3, for someC 2 > 0 as long as C 2 3 log n 6 · T ω 2 > 2 + γ and C3 2 · n 1/2 ω > 2 + γ. Similarly, we have · n ω 2 · T > 2 + γ and C3 2 · n 1/2 ω > 2 + γ Thus, combine the above results, we have that with probability 1 − 2n −(C2+γ) , max i,j:i =j (Ā 2 /n) ij − (P 2 /n) ij ≤ 3 n = 3C 3 log n n 1/2 ω . P max i,j:i =j (Ā iiĀij − P ii P ij ) n > n ≤ 2n 2 exp − Following the same argument as [39], we have that for all i and anyĩ such that ξĩ ∈ S i (∆ n ), (P 2 /n) ik − (P 2 /n)ĩ k = | P i· , P k· − Pĩ · , P k· | /n ≤ P i· − Pĩ · 2 P k· 2 /n ≤ L∆ n , for all k = 1, . . . , n, where the last inequality follows from the piecewise Lipschitz condition of the graphon such that |Pĩ l − P il | = |f (ξĩ, ξ l ) − f (ξ i , ξ l )| ≤ L|ξĩ − ξ i | ≤ L∆ n for all l = 1, . . . , n, and from P k· 2 ≤ n 1/2 for all k. We now try to upper boundd(i, i ) for all i ∈ N i . We first boundd(i,ĩ) for allĩ with ξĩ ∈ S i (∆ n ) simultaneously. 
By above, we know that with probability 1 − 2n −(C2+γ) , we havẽ d(i,ĩ) = max k =i,ĩ (Ā 2 /n) ik − (Ā 2 /n)ĩ k ≤ max k =i,ĩ (P 2 /n) ik − (P 2 /n)ĩ k + 2 max i,j:i =j (Ā 2 /n) ij − (P 2 /n) ij ≤ L∆ n + 6C 3 log n n 1/2 ω , for all i and anyĩ such that ξĩ ∈ S i (∆ n ). By the above result and Lemma 9. We are now ready to complete the proof of the second claim of Lemma 9.3. With probability 1 − 2n −(C1+γ) − 2n −(C2+γ) , for n large enough such that min i |{i = i : ξ i ∈ S i (∆ n )}| ≥ B 1 n 1/2 log n ω ≥ 4 (by Lemma 9.2), we have that for all i and i ∈ N i , we can findĩ ∈ S i (∆ n ), i ∈ S i (∆ n ) such thatĩ, i,ĩ , i are different from each other and P i· − P i · 2 2 /n = (P 2 /n) ii − (P 2 /n) i i + (P 2 /n) i i − (P 2 /n) ii ≤ |(P 2 /n) ii − (P 2 /n) i i | + |(P 2 /n) i i − (P 2 /n) ii | ≤ |(P 2 /n) iĩ − (P 2 /n) i ĩ | + |(P 2 /n) i ĩ − (P 2 /n) iĩ | + 4L∆ n ≤ |(Ā 2 /n) iĩ − (Ā 2 /n) i ĩ | + |(Ā 2 /n) i ĩ − (Ā 2 /n) iĩ | + 4L∆ n + 12C 3 log n n 1/2 ω ≤ 2 max k =i,i |(Ā 2 /n) ik − (Ā 2 /n) i k | + 4L∆ n + 12C 3 log n n 1/2 ω = 2d(i, i ) + 4L∆ n + 12C 3 log n n 1/2 ω ≤ 6L∆ n + 24C 3 log n n 1/2 ω = (6LC 1 + 24C 3 ) log n n 1/2 ω . This completes the proof of Lemma 9.3. Based on Lemma 9.2 and 9.3, we are now ready to prove Theorem 3.2, which provides the error bound for MNBS. Proof of Theorem 3.2. To prove Theorem 3.2, it suffices to show that with high probability, the following holds for all i. 1 n j (P ij − P ij ) 2 ≤ C · log n n 1/2 ω . We first perform a bias-variance decomposition such that 2. Lemma 9.3: C 2 3 log n 6 · T ω 2 > 2 + γ and C3 2 · n 1/2 ω > 2 + γ and B 1 ≥ B 0 ; 3. C 2 4 (log n) 3 4 · n ω 2 · T 2 ω 2 > (1 + γ) and 3C4 log n 4 · n ω 2 > 1 + γ; C 2 5 log n 6 · T 2 ω 2 > 2 + γ and C5 2 · n 1/2 ω > 2 + γ; ω 2 T (log n) 2 ≤ 1. It is easy to see that, for any γ > 0 and B 0 > 0, we can always find B 1 , C 1 , C 3 , C 4 , C 5 such that all inequalities in (1)-(3) hold for all n large enough as long as ω ≤ min(n 1/2 , (T log n) 1/2 ). Take ω = min(n 1/2 , (T log n) 1/2 ), this completes the proof of Theorem 3.2. The key observation is that for both an h-flat point and an true change-point, the adjacency matrices {A (i) } t i=t−h+1 or {A (i) } t+h i=t+1 that are used in the estimation ofP t1,h orP t2,h are generated by the same probability matrix P and thus the result of Theorem 3.2 can be directly applied. By assumption we have (h log n) 1/2 < n 1/2 , thus ω = min(n 1/2 , (h log n) 1/2 ) = (h log n) 1/2 . Thus by Theorem 3.2, for any t that is an h-flat point, we have P (D(t, h) > ∆ D ) = P (d 2,∞ (P t1,h ,P t2,h ) 2 > ∆ D ) ≤P (d 2,∞ (P t1,h ,P t1,h ) 2 + d 2,∞ (P t2,h ,P t2,h ) 2 > ∆ D /2) ≤P (d 2,∞ (P t1,h ,P t1,h ) 2 > ∆ D /4) + P (d 2,∞ (P t2,h ,P t2,h ) 2 > ∆ D /4) ≤ 2n −γ , where the second to last inequality uses the fact thatP t1,h =P t2,h for an h-flat point, and the last inequality follows from Theorem 3.2 and the fact that ∆ D /(C(log n) 1/2 /(n 1/2 h 1/2 )) → ∞ for any C > 0. 
For any t that is a true change-point, we have P (D(t, h) > ∆ D ) = P (d 2,∞ (P t1,h ,P t2,h ) 2 > ∆ D ) ≥P (d 2,∞ (P t1,h ,P t2,h ) − d 2,∞ (P t1,h ,P t1,h ) − d 2,∞ (P t2,h ,P t2,h ) > ∆ D ) =P (d 2,∞ (P t1,h ,P t1,h ) + d 2,∞ (P t2,h ,P t2,h ) < d 2,∞ (P t1,h ,P t2,h ) − ∆ D ) ≥P (d 2,∞ (P t1,h ,P t1,h ) + d 2,∞ (P t2,h ,P t2,h ) < ∆ D ( ∆ * /∆ D − 1)) ≥1 − P (d 2,∞ (P t1,h ,P t1,h ) 2 > ∆ D ( ∆ * /∆ D − 1) 2 /4) − P (d 2,∞ (P t2,h ,P t2,h ) 2 > ∆ D ( ∆ * /∆ D − 1) 2 /4) ≥ 1 − 2n −γ , where the second inequality uses the fact that d 2,∞ (P t1,h ,P t2,h ) ≥ √ ∆ * for a true change-point, and the last inequality follows from Theorem 3.2 and the fact that ∆ D /(C(log n) 1/2 /(n 1/2 h 1/2 )) → ∞ for any C > 0. Let F h be the set of all flat points t and J be the set of all true change-points. Consider the event A τ = {D(τ, h) > ∆ D } for true change-points τ ∈ J and the event B t = {D(t, h) < ∆ D } for flat points t ∈ F h . Define the event ξ n = τ ∈J A τ t∈F h B t . By the above result, we have that P (ξ n ) = 1 − P (ξ c n ) ≥ 1 − P τ ∈J A c τ − P t∈F h B c t ≥ 1 − 2T n −γ → 1, as long as T n −γ → 0. We now prove that ξ n implies the event {Ĵ = J} ∩ {J ⊂:Ĵ ± h}. Under ξ n , no flat point will be selected at the thresholding steps. Thus, for any pointτ ∈ J , there is at least one change-point in its neighborhood {τ − h + 1, . . . ,τ + h − 1}. On the other hand, by assumption h < D * /2, thus, there exists at most one change-point in {τ − h + 1, . . . ,τ + h − 1}. Together, it implies that there is exactly one change-point in {τ − h + 1, . . . ,τ + h − 1} for eachτ ∈Ĵ . Meanwhile, under ξ n , for every true change-point τ ∈ J , we have D(τ, h) > ∆ D . Note that τ − h and τ + h are h-flat points since h < D * /2, thus max(D(τ + h, h), D(τ − h, h)) < ∆ D . Thus, for every true change-point τ, there exists a local maximizer, sayτ , which is in {τ −h+1, . . . , τ +h−1} with D(τ , h) ≥ D(τ, h) > ∆ D . Combining the above result, we have that P {Ĵ = J} ∩ {J ⊂:Ĵ ± h} → 1.
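To connect the pieces used in the argument above (the scan statistic $D(t, h)$, the threshold $\Delta_D$ and the $h$-local maximizers), here is a hedged sketch of the overall detection loop. The graphon estimator is abstracted behind `estimate_P`, a stand-in for the MNBS neighborhood-smoothing estimator which is not reproduced here, and the exact window indexing is simplified; the snippet illustrates only the scan-and-threshold logic, not the full procedure.

```python
import numpy as np

def d2inf_sq(P1, P2):
    """Squared normalized (2, infinity) distance: max_i ||P1_i. - P2_i.||_2^2 / n."""
    n = P1.shape[0]
    return float(np.max(np.sum((P1 - P2) ** 2, axis=1)) / n)

def detect_change_points(A, h, threshold, estimate_P):
    """Scan-statistic change-point detection on a sequence of adjacency matrices.

    A          : array of shape (T, n, n)
    h          : local window size
    threshold  : detection threshold Delta_D
    estimate_P : callable mapping a stack of adjacency matrices to an estimate of P
    Returns the h-local maximizers of D(t, h) that exceed the threshold.
    """
    T = A.shape[0]
    D = np.full(T, -np.inf)
    for t in range(h, T - h):
        P_left = estimate_P(A[t - h:t])    # estimate from the window left of t
        P_right = estimate_P(A[t:t + h])   # estimate from the window right of t
        D[t] = d2inf_sq(P_left, P_right)
    change_points = []
    for t in range(h, T - h):
        window = D[t - h + 1:t + h]        # h-local neighborhood of t
        if D[t] >= window.max() and D[t] > threshold:
            change_points.append(t)
    return change_points

# Example with a naive placeholder estimator (the entry-wise mean of the window):
# cps = detect_change_points(A, h=10, threshold=0.05, estimate_P=lambda W: W.mean(axis=0))
```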
9,660
1908.01536
2965290354
Current techniques for explainable AI have been applied with some success to image processing. The recent rise of research in video processing has called for similar work in deconstructing and explaining spatio-temporal models. While many techniques are designed for 2D convolutional models, others are inherently applicable to any input domain. One such body of work, deep Taylor decomposition, propagates relevance from the model output distributively onto its input and thus is not restricted to image processing models. However, by exploiting a simple technique that removes motion information, we show that this technique is not effective as-is for representing relevance in non-image tasks. We instead propose a discriminative method that produces a naive representation of both the spatial and the temporal relevance of a frame as two separate objects. This new discriminative relevance model exposes relevance in the frame attributed to motion that was previously ambiguous in the original explanation. We observe the effectiveness of this technique on a range of samples from the UCF-101 action recognition dataset, two of which are demonstrated in this paper.
Inflating convolutional layers to 3D for video tasks was first explored in @cite_15 , in which the authors chose to optimise an architecture for the video task, rather than adapt one from an image problem. Both @cite_1 and @cite_5 have adapted large image classification models (Inception and ResNet respectively) to activity recognition tasks, such as @cite_12 @cite_14 . Aside from the added dimensionality, these architectures are much the same as in image tasks, and intuitively find similar success in the spatio-temporal domain as they do in the spatial domain, achieving state-of-the-art performance. These models are as complex and black-box in nature as their 2D counterparts and as such the motivation to explain them also translates.
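The 2D-to-3D inflation mentioned above can be illustrated with a short PyTorch sketch. This is only a generic version of the recipe described in @cite_1 (repeat a pretrained 2D kernel along the new temporal axis and rescale so that a video of identical frames produces the same response as the original image); the exact I3D architecture and initialisation details are not reproduced here.

```python
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, time_kernel: int = 3) -> nn.Conv3d:
    """Inflate a pretrained 2D convolution into a 3D (spatio-temporal) one."""
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(time_kernel, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(time_kernel // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        # (out, in, kH, kW) -> (out, in, t, kH, kW), divided by t so that a clip of
        # identical frames yields the same activations as the single image did.
        w2d = conv2d.weight
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, time_kernel, 1, 1) / time_kernel)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)
conv3d = inflate_conv2d(conv2d)
clip = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, time, height, width)
print(conv3d(clip).shape)                # torch.Size([1, 64, 16, 112, 112])
```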
{ "abstract": [ "We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.", "The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels. Recently, the performance levels of 3D CNNs in the field of action recognition have improved significantly. However, to date, conventional research has only explored relatively shallow3Darchitectures. We examine the architectures of various 3D CNNs from relatively shallow to very deep ones on current video datasets. Based on the results of those experiments, the following conclusions could be obtained: (i) ResNet-18 training resulted in significant overfitting for UCF-101, HMDB-51, and ActivityNet but not for Kinetics. (ii) The Kinetics dataset has sufficient data for training of deep 3D CNNs, and enables training of up to 152 ResNets layers, interestingly similar to 2D ResNets on ImageNet. (iii) Kinetics pretrained simple 3D architectures outperforms complex2D architectures, and the pretrained ResNeXt-101 achieved 94.5 and 70.2 on UCF-101 and HMDB-51, respectively. The use of 2D CNNs trained on ImageNet has produced significant progress in various tasks in image. We believe that using deep 3D CNNs together with Kinetics will retrace the successful history of 2D CNNs and ImageNet, and stimulate advances in computer vision for videos. The codes and pretrained models used in this study are publicly available.", "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. 
Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.", "We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers." ], "cite_N": [ "@cite_14", "@cite_1", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "24089286", "2619082050", "2770565591", "1983364832", "2619947201" ] }
Discriminating Spatial and Temporal Relevance in Deep Taylor Decompositions for Explainable Activity Recognition
Recent success in solving image recognition problems can be attributed to the application of increasingly complex convolutional neural networks (CNNs) that make use of spatial convolutional feature extractors. This success has been closely followed by a call for explainability and transparency of these networks, which have inherently been regarded as black-boxes. Significant efforts have been made towards explaining decisions in image recognition tasks, with nearly all of these producing explanations through an image medium [Bach et al., 2015; Simonyan et al., 2013; Baehrens et al., 2010; Zhou et al., 2016]. Even more recently, the success in solving image recognition problems has been followed by the appearance of analogous video recognition models that make effective use of CNNs by extending the convolution dimensionality to be spatio-temporal, or 3D [Ji et al., 2013; Carreira and Zisserman, 2017; Hara et al., 2018]. Intuitively, the same methods that have been successful in explaining image models can be applied to video. Many of these methods, notably the popular deep Taylor decomposition [Montavon et al., 2017], function on any input without modification. However, the additional temporal dimension is not conceptually similar to, and hence not exchangeable with, the two spatial dimensions in the input. This is not accounted for by the model, which simply convolves the input in all dimensions in the same manner. This is reflected in explanations using image-based methods like deep Taylor. A pixel, or voxel in this case, is not marked according to whether it is temporally or spatially relevant, but on the basis of its combined spatio-temporal relevance. By applying the deep Taylor method to a 3D convolutional network trained on the UCF-101 activity recognition dataset [Soomro et al., 2012], and additionally explaining the same class for each individual frame as a separate input, we effectively explain the spatial relevance of that frame. We show that by subtracting this from the original explanation, one can expose the underlying relevance attributed to motion in the frame. Thus we propose a new discriminative relevance model, which reveals the relevance attributed to motion that was previously hidden by the accompanying spatial component.
Spatial and Temporal Relevance in 3D CNNs
3D CNNs
3D CNNs extend the convolutional layers to the third dimension. In 3D CNNs, a sliding cube passes over a 3D block formed by the frames of the video stacked one on top of another, as opposed to a sliding 2D window passing over the image in 2D CNNs. This results in not only spatial features but also features of motion being learned. In the process of explaining the input, however, the relevance for the video is deconstructed into the original frames, which can be animated in the same manner as the input itself. Although the frames can be staggered, made transparent, and visualised as a block (see [Stergiou et al., 2019] for an example), the explanation is essentially viewed as a series of images; this is also the case with [Srinivasan et al., 2017]. In this manner it is impossible to distinguish the effect of the motion of the objects in the frame. At the same time, discerning whether a segment of the frame is considered relevant because of its shape and colour, i.e., its spatial information, or because of its relationship to similar areas in the neighbouring frames, i.e., motion, can be important for explaining the decisions a CNN makes.
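As a minimal illustration of the sliding-cube operation just described (not tied to any particular architecture from the paper), a single 3D convolution in PyTorch consumes a clip laid out as (batch, channels, time, height, width) and convolves jointly over the temporal and the two spatial axes:

```python
import torch
import torch.nn as nn

# A 16-frame RGB clip at 112x112 resolution: (batch, channels, time, height, width).
clip = torch.randn(1, 3, 16, 112, 112)

# One spatio-temporal convolution: the 3x3x3 kernel slides over time as well as space,
# so each output voxel mixes information from three consecutive frames.
conv3d = nn.Conv3d(in_channels=3, out_channels=64, kernel_size=(3, 3, 3), padding=1)
features = conv3d(clip)
print(features.shape)  # torch.Size([1, 64, 16, 112, 112])

# Contrast with a purely spatial 2D convolution applied frame by frame,
# which never mixes neighbouring frames and hence learns no motion features.
conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)
per_frame = torch.stack([conv2d(clip[:, :, t]) for t in range(clip.shape[2])], dim=2)
print(per_frame.shape)  # torch.Size([1, 64, 16, 112, 112])
```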
While one can infer important frames from the approach in [Srinivasan et al., 2017], this does not address the issue of spatially relevant objects that aren't visible in other frames, nor does it necessarily localise the temporally important regions.
Separating Spatial and Temporal Components through Discriminative Relevance
Distinguishing the spatial and temporal relevance of pixels (voxels) when using 3D CNNs is not always possible. A kernel does not necessarily focus on either spatial or temporal features, but in most cases will have components in both. As a result, decoupling spatial and temporal relevance is not intuitive given only the original explanation. Instead, the motion can be removed from the input video. By performing multiple additional forward passes, each of which takes as input a single frame from the original video, the temporal information can effectively be excluded from the model's decision. The resulting video depicts a freeze frame at that instant, and thus the model has only the spatial information in the scene to infer from. By doing this, we can build up an image of the purely spatial relevance in each frame. The required additional computation scales linearly with the size of the temporal dimension of an input sample. Intuitively, by downweighting relevance in the original explanation by the spatial relevance reconstructed from each frame's explanation, what is left will be the relevance based on motion.
Implementation
All work presented in this article is implemented in Python, using the deep learning platform PyTorch, with the autograd functionality modified to apply LRP rules during backpropagation in place of the normal gradient calculation.
The explain PyTorch extension
The PyTorch autograd library provides the basis on which symbolic graphs are created. The Function class can be extended to create mathematical operations with custom gradient functions. We adapted this library to work with LRP techniques as the unofficial explain library¹. On a forward pass, a model using the explain Function class acts in the same way as a PyTorch model using the autograd Function class. Weights are also loaded into the model in the same way as for PyTorch models. The functionality differs on the backwards pass. The autograd library allows custom backwards functions to be defined. Through this feature, we implemented convolutional, batch norm, pooling and linear layers whose gradient functions instead propagate relevance. In this way, we can generate an LRP explanation for a model given an input by performing a forward pass and then backpropagating the relevance, beginning at the chosen output neuron, back onto the input sample.
Model
The model is a 3D CNN following the C3D design from [Tran et al., 2015], the architecture for which is shown in Figure 1. The code is adapted from an implementation available at https://github.com/jfzhang95/pytorch-video-recognition. We fine-tuned the pretrained weights made available, reaching 75% validation accuracy on the UCF-101 dataset.
Deep Taylor Decomposition
Our implementation of deep Taylor decomposition follows the most up-to-date version found at https://github.com/albermax/innvestigate, a repository maintained by one of the lead authors of the LRP and deep Taylor papers. The implementation is summarised as follows:
• As in the original deep Taylor paper, ReLU nonlinearities simply pass on relevance, without modifying it in any way:
$$R_k = R_j$$
• Relevance for pooling layers is generated by multiplying the input to the layer by the relevance w.r.t. that input, calculated using the regular backpropagation rules for pooling layers. For max-pooling:
$$R_k = \delta_{kj} R_j$$
where $\delta_{kj}$ is a mask indicating whether the neuron $k$ was selected by the pooling kernel $j$. For average-pooling:
$$R_k = \frac{R_j}{N_j}$$
where $N_j$ is the number of values in the pooling kernel $j$.
• The convolutional layers use the $\alpha\beta$ relevance rule, which focuses explanations by injecting some negative relevance:
$$R_i = \sum_j \left(\alpha\,\frac{z_{ij}^{+}}{\sum_{i'} z_{i'j}^{+} + b_j^{+}} - \beta\,\frac{z_{ij}^{-}}{\sum_{i'} z_{i'j}^{-} + b_j^{-}}\right) R_j$$
• The first convolutional layer (for which the input is the sample) uses the $z^{\mathcal{B}}$ rule, which makes use of the restricted input space (0 to 255 for pixel values) in finding a root point:
$$R_i = \sum_j \frac{z_{ij} - l_i w_{ij}^{+} - h_i w_{ij}^{-}}{\sum_{i'}\left(z_{i'j} - l_{i'} w_{i'j}^{+} - h_{i'} w_{i'j}^{-}\right)}\, R_j$$
¹ This code is freely available at https://github.com/liamhiley/torchexplain
Padding
Without the use of adaptive pooling, the single-frame inputs would be too small to pass through the network. Still, this would result in more of the spatial information for the frame being conserved than originally as part of the video. Instead, we padded the frame to the same size as the input video. We chose to pad the input by repeating the frame n times, where n is a typical sample length, rather than use zero padding, which would create false temporal information through the near-instant change from all pixels to black. This is supported by the findings in [Hooker et al., 2018], where a similar issue arose when quantifying the accuracy of feature attribution techniques like LRP by replacing relevant pixels with zero-black pixels.
Results
In this section we show the result of subtracting spatial relevance from a deep Taylor explanation of an input video, featuring a person performing pull-ups on a bar. The three different explanations can be seen in Figure 2. The frames shown are processed and explained as a three-dimensional block, and displayed as two-dimensional slices.
[Figure 2. Left: the 1st, 6th, 10th and 16th frames, respectively, from a 16-frame sample. Right: the 1st, 2nd, 3rd and 4th frames, respectively, from a 16-frame sample. 1st row: the original frames; 2nd row: the deep Taylor explanation for the sample; 3rd row: the spatial-only deep Taylor explanation for each frame; 4th row: the remaining temporal explanation after subtracting (3rd) from (2nd). Red is positive relevance, blue is negative relevance, white is no relevance.]
In the original relevance (Figure 2, row 2), most of the scene is marked relevant, with heavy focus on edges. This observation is reinforced by the spatial relevance (Figure 2, row 3), which, in the absence of temporal information, highlights edges much more heavily. The agreement between the spatial and original explanations demonstrates the ambiguity in 3D explanations with gradient-based techniques like deep Taylor. It is unclear, even with the spatial explanation as a reference, what in the scene is relevant for its motion, as every object is to a degree marked relevant. The difference becomes clearer with the inclusion of the temporal explanation. Subtracting the spatial explanation from the original explanation shows a large amount of remaining relevance in the core of the man's body and his head. The relevance in the background, the metal frame and the video watermark are all negative as a result, suggesting they are all highly spatially relevant.
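A compact sketch of the frame-freezing and subtraction procedure used to produce these maps is given below. The function `explain` is a placeholder for the modified LRP/deep Taylor backward pass of the torchexplain library described in the Implementation section (its exact API is not reproduced here), and averaging the frozen clip's relevance over time is only one plausible way to collapse it back to a single frame; both are assumptions made for illustration.

```python
import torch

def discriminative_relevance(model, explain, clip, target_class):
    """Split a deep Taylor explanation of a clip into spatial and temporal parts.

    clip: tensor of shape (1, C, T, H, W).
    explain(model, x, target_class) is assumed to return a relevance tensor with
    the same shape as x (stand-in for the LRP/deep Taylor backward pass).
    """
    _, _, T, _, _ = clip.shape

    # 1. Relevance of the full clip: mixed spatio-temporal relevance.
    full_rel = explain(model, clip, target_class)            # (1, C, T, H, W)

    # 2. Spatial-only relevance: freeze each frame by repeating it T times,
    #    so the model sees no motion, then collapse the result to one frame.
    spatial_rel = torch.zeros_like(full_rel)
    for t in range(T):
        frozen = clip[:, :, t:t + 1].repeat(1, 1, T, 1, 1)   # frame t repeated T times
        frozen_rel = explain(model, frozen, target_class)
        spatial_rel[:, :, t] = frozen_rel.mean(dim=2)        # one possible reduction

    # 3. What remains after down-weighting by the spatial part is attributed to motion.
    temporal_rel = full_rel - spatial_rel
    return spatial_rel, temporal_rel
```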
The effect described above is displayed for the beginning, end, and two intermediate frames of the sample. The bulk of the temporal relevance is found at the highest and lowest points of the exercise, but is absent from the intermediate frames. This suggests that the motion at the key moments of the activity is at the lowest and highest points of the pull-up, possibly due to the sharp change in movement, as at these points both the lowering and raising of the body are observed. This information is much more difficult to infer from the original explanation, where the 1st and 16th frames are overall much more heavily red (or relevant). In this example, key frames as well as salient regions are highlighted for motion. In the second example, of a person serving a tennis ball, the activity is observed at a more fine-grained framerate. Likely because of this, the relevance of motion is more regular over the 4 neighbouring frames when compared to the sparsely-sampled frames of the pull-ups. This serves more to highlight temporally relevant objects in the scene: specifically, the tennis ball and the person's upper body, where the swinging motion originates. Again, in the original explanation this information is not clear. In fact, the ball and upper torso are relatively indistinguishable from spatially relevant features like the lawn and the building in the background. As seen in the spatial explanation, the relevance of these regions has a spatial component as well. Also interesting is the change in prediction by the model when given only spatial information. For the 1st, 6th, 10th and 16th frame-only inputs, the model predicted Wall Push-Ups, Golf Swing, Clean And Jerk, and Clean And Jerk again, respectively. While only two samples are illustrated in this paper, similar results were observed for other test samples in the UCF-101 dataset, indicating that these examples are not anomalous and that the method is general. The evidence that our method is an approximation is twofold. Firstly, the inequality between the sum of the spatial and temporal relevance and the original relevance shows that the former are not true fractions of the latter. Furthermore, the fact that the spatial relevance for the non-dominant class (Pull Ups, when the model has decided Clean And Jerk) is greater than for the same frames explained as the dominant class (Pull Ups, when the model had decided Pull Ups) supports this.
Conclusion
In this paper we have introduced a new use case for separating and visualising the spatial and temporal components of an explanation by deep Taylor decomposition for a spatio-temporal CNN, one that is easy to implement and incurs relatively little extra computational cost. By exploiting a simple method of removing motion information from video input, we have essentially generated a negative mask that can be applied to an explanation to remove the spatial component of the relevance. The resulting explanation provides much more insight into the salient motion in the input than the general relevance, which we show is noisy with misleading spatial relevance, i.e., most edges in the frame. While we expose an unsuitability of the current implementation of the deep Taylor method for inputs with non-exchangeable dimensions, this work is ongoing. In the future, it will be necessary to formalise a method for exposing the true spatial relevance in the frame, as opposed to an approximation such as our method.
2,297
1908.01536
2965290354
A variety of approaches have been attempted for explaining decisions made by deep neural networks. For example, in @cite_8 the authors propose feature visualisation for CNNs, in which input images are optimised to maximally activate each filter in the CNN convolutional layers, following work in @cite_3 on non-convolutional models. Local explanations, in the sense that they are local to a single input, explain the input's contribution to the model's decision using feature attribution; these have found much success in explaining deep image processing models. These methods in some way approximate the contribution of the input variables — pixels, or features at a higher level — to the model's decision, most commonly in a supervised task. This has been implemented in a number of ways, for example through the use of probability gradients @cite_11 , global average pooling @cite_7 and its generalisation to networks with hidden layers in @cite_6 , or through local relevance based around a decision-neutral root point @cite_10 @cite_13 @cite_4 . These works all use information from the model's internal parameters, i.e., its weights and activations, in generating an explanation.
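As a concrete instance of the gradient-based feature-attribution family mentioned above (a plain input-gradient saliency map, not the specific method of any one citation), such an explanation can be computed in a few lines of PyTorch; the tiny CNN below is only a stand-in for a real classifier.

```python
import torch
import torch.nn as nn

# Any differentiable classifier will do; a tiny CNN stands in for a real model here.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
).eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)
logits = model(image)
target = int(logits.argmax(dim=1))

# Gradient of the chosen class score with respect to the input pixels.
model.zero_grad()
logits[0, target].backward()

# Collapse the channel dimension; large magnitudes mark pixels whose small
# perturbations would most change the class score.
saliency = image.grad.abs().max(dim=1).values   # shape: (1, 224, 224)
print(saliency.shape)
```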
{ "abstract": [ "", "We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. 
Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E.", "", "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.", "" ], "cite_N": [ "@cite_13", "@cite_4", "@cite_7", "@cite_8", "@cite_6", "@cite_3", "@cite_10", "@cite_11" ], "mid": [ "", "2776207810", "2950328304", "2962851944", "2616247523", "", "1787224781", "" ] }
Discriminating Spatial and Temporal Relevance in Deep Taylor Decompositions for Explainable Activity Recognition
Recent success in solving image recognition problems can be attributed to the application to these problems of increasingly complex convolutional neural networks (CNNs) that make use of spatial convolutional feature extractors. This success has been closely followed by a call for explainability and transparency of these networks, which have inherently been regarded as black-boxes. Significant efforts have been made towards explaining decisions in image recognition tasks, with nearly all of these producing explanations through an image medium [Bach et al., 2015;Simonyan et al., 2013;Baehrens et al., 2010;Zhou et al., 2016]. Even more recently, the success in solving image recognition problems has been followed by the appearance of analogous video recognition models that make effective use of CNNs by extending the convolution dimensionality to be spatio-temporal or 3D [Ji et al., 2013;Carreira and Zisserman, 2017;Hara et al., 2018]. Intuitively, the same methods that have been successful in explaining image models can be applied to video. Many of these methods, notably the popular deep Taylor decomposition [Montavon et al., 2017], function on any input without modification. However, the additional temporal dimension is not conceptually similar and, hence, exchangeable with the two spatial dimensions in the input. This is not accounted for by the model, which simply convolves the input in all dimensions in the same manner. This is reflected in explanations using image-based methods like deep Taylor. A pixel, or voxel in this case, is not marked whether it is temporally or spatially relevant, but on the basis of its combined spatio-temporal relevance. By applying the deep Taylor method to a 3D convolutional network trained on the UCF-101 activity recognition dataset [Soomro et al., 2012], and additionally explaining the same class for each individual frame as a separate input, we effectively explain the spatial relevance of that frame. We show that by subtracting this from the original explanation, one can expose the underlying relevance attributed to motion in the frame. Thus we propose a new discriminative relevance model, which reveals the relevance attributed to motion, that was previously hidden by the accompanying spatial component. Spatial and Temporal Relevance in 3D CNNs 3D CNNs 3D CNNs extend the convolutional layers to the third dimension. In 3D CNNs, a sliding cube passes over a 3D block formed by the frames of the video stacked one on top of another, as opposed to a sliding 2D window passing over the image in 2D CNNs. This results not only in spatial features, but also features of motion, being learned. In the process of explaining the input, however, the relevance for the video is deconstructed into the original frames, which can be animated in the same manner as the input itself. Although the frames can be staggered, made transparent, and visualised as a block (see [Stergiou et al., 2019] for an example), the explanation is essentially viewed as a series of images; this is also the case with [Srinivasan et al., 2017]. In this manner it is impossible to distinguish the effect of the motion of the objects in the frame. At the same time, discerning whether a segment of the frame is considered relevant because of its shape and colour, its spatial information, or because of its relationship to similar areas in the neighbouring frames, i.e., motion, can be important for explaining the decisions a CNN makes. 
While one can infer important frames from the approach in [Srinivasan et al., 2017], this does not address the issue of spatially relevant objects that aren't visible in other frames, nor does it necessarily localise the temporally important regions. Separating Spatial and Temporal Components through Discriminative Relevance Distinguishing spatial and temporal relevance of pixels (voxels) when using 3D CNNs is not always possible. A kernel does not necessarily focus on either spatial or temporal features, but in most cases will have components in both. As a result, decoupling spatial and temporal relevance is not intuitive given only the original explanation. Instead, the motion can be removed from the input video. By performing multiple additional forward passes, which each take as an input a single frame from the original video, the temporal information can effectively be excluded from the model's decision. The resulting video would depict a freeze frame at that instant, and thus the model would only have the spatial information in the scene to infer from. By doing this, we can build up an image of the purely spatial relevance in each frame. The required additional computation scales linearly with the size of the temporal dimension of an input sample. Intuitively, by downweighting relevance in the original explanation by the spatial relevance reconstructed from each frame's explanation, what is left will be the relevance based on motion. Implementation All work presented in this article is implemented in Python, using the deep learning platform PyTorch, with the autograd functionality modified to apply LRP rules during backpropagation in place of the normal gradient calculation. The explain PyTorch extension The PyTorch autograd library provides the basis on which symbolic graphs are created. The Function class can be extended to create mathematical operations with custom gradient functions. We adapted this library to work with LRP techniques as the unofficial explain library 1 . On a forward pass, a model using the explain Function class will act in the same way as a PyTorch model using the autograd Function class. Weights are also loaded into the model in the same way as PyTorch models. The functionality differs on the backwards pass. The autograd library allows for custom backwards functions to be defined. Through this feature, we implemented convolutional, batch norm, pooling and linear layers whose gradient functions instead propagate relevance. In this way, we can generate an LRP explanation for a model given an input, by performing a forward pass, and then backpropagating the relevance, beginning at the chosen output neuron, back onto the input sample. Model The model is a 3D CNN following the C3D design from [Tran et al., 2015], the architecture for which is shown in Figure 1. The code is adapted from an implementation available at https://github.com/jfzhang95/pytorch-video-recognition. We fine-tuned the pretrained weights made available, to 75% validation accuracy on the UCF-101 dataset. Deep Taylor Decomposition Our implementation of deep Taylor decomposition follows the most up-to-date version found on https://github.com/albermax/innvestigate, a repository maintained by one of the lead authors on the LRP and deep Taylor papers. The implementation is summarised as follows: • As in the original deep Taylor paper, ReLU nonlinearities simply pass on relevance, without modifying it in any way. 
R k = R j • Relevance for pooling layers is generated by multiplying the input to the layer by the relevance w.r.t that input, calculated using regular backpropagation rules for pooling layers. For max-pooling: R k = δ k j R j Where δ k j is a mask of whether the neuron k was selected by the pooling kernel j. For average-pooling: R k = R j N j 1 This code is freely available at https://github.com/liamhiley/torchexplain Where N j is the number of values in the pooling kernel j. • The convolutional layers use the αβ relevance rule, which focuses explanations by injecting some negative relevance. R i = j (α z + ij i z + ij + b + j − β z − ij i z − ij + b − j )R j • The first convolutional layer (for which the input is the sample) uses the z β rule which makes use of the restricted input space (0 to 255 for pixel values) in finding a root point. R i = j z ij − l i w + ij − h i w − ij i z i j − l i w + i j − h i w − i j R j Padding Without the use of adaptive pooling, the single frame inputs would be too small to pass through the network. Still, this would result in more of the spatial information for the frame conserved than originally as part of the video. Instead, we padded the frame to the same size as the input video. We chose to pad the input by repeating the frame n times, where n a typical sample size-number, rather than use zero padding, which would create false temporal information through the near-instant change from all pixels to black. This is supported by the findings in [Hooker et al., 2018] where a similar issue arose when quantifying the accuracy of feature attribution techniques like LRP by replacing relevant pixels with zero-black pixels. Results In this section we show the result of subtracting spatial relevance from a deep Taylor explanation of an input video, featuring a person performing pull-ups on a bar. The three different explanations can be seen in Figure 2. The frames shown are processed, and explained as a three dimensional block, and displayed as two dimensional slices. In the original relevance ( Figure 2: Row 2), most of the scene is marked relevant, with heavy focus on edges. This observation is reinforced by the spatial relevance (Figure 2: Row 3), which in the absence of temporal information, highlights edges much more heavily. The agreement between the spatial and original explanations demonstrates the ambiguity in 3D explanations with gradient-based techniques like deep Taylor. It is unclear even with the spatial explanation as a reference, what in the scene is relevant for its motion, as every object is to a degree marked relevant. The difference becomes clearer with the inclusion of the temporal explanation. Subtracting the spatial explanation from the original explanation, shows a large amount of remaining relevance in the Figure 2: Left: The 1 st , 6 th , 10 th and 16 th frames respectively, from a 16-frame sample. Right: The 1 st , 2 nd , 3 rd and 4 th frames respectively, from a 16-frame sample. 1 st row: the original frames; 2 nd row: the deep Taylor explanation for the sample; 3 rd row: the spatial-only deep Taylor explanation for each frame; 4 th row: the remaining temporal explanation after subtracting (3 rd ) from (2 nd ). Red is positive relevance, blue is negative relevance, white is no relevance. core of the man's body and his head. The relevance in the background, the metal frame and the video watermark are all negative as a result, suggesting they are all highly spatially relevant. 
This effect is displayed for the beginning, end, and two intermediate frames of the sample. The bulk of the temporal relevance is found at the highest and lowest points of the exercise, but is absent from the intermediate frames. This suggests that the key moments of motion in the activity occur at the lowest and highest points of the pull-up, possibly due to the sharp change in movement, as at these points both the lowering and the raising of the body are observed. This information is much more difficult to infer from the original explanation, where the 1st and 16th frames are overall much more heavily red (or relevant). In this example, key frames as well as salient regions are highlighted for motion. In the second example, of a person serving a tennis ball, the activity is observed at a more fine-grained framerate. Likely because of this, the relevance of motion is more regular over the four neighbouring frames than over the sparsely sampled frames of the pull-ups. This serves more to highlight temporally relevant objects in the scene: specifically, the tennis ball and the person's upper body, from which the swinging motion originates. Again, in the original explanation this information is not clear. In fact, the ball and upper torso are relatively indistinguishable from spatially relevant features like the lawn and the building in the background. As seen in the spatial explanation, the relevance of these regions has a spatial component as well. Also interesting is the change in prediction by the model when given only spatial information. For the 1st, 6th, 10th and 16th frame-only inputs, the model predicted Wall Push-Ups, Golf Swing, Clean And Jerk, and Clean And Jerk again, respectively. While only two samples are illustrated in this paper, similar results were observed for other test samples in the UCF-101 dataset, indicating that these examples are not anomalous and that the method is general. The evidence that our method is an approximation is twofold. Firstly, the sum of the spatial and temporal relevance does not equal the original relevance, showing that the former are not true fractions of the latter. Furthermore, the spatial relevance for a non-dominant class (Pull Ups, when the model has decided Clean And Jerk) is greater than that for the same frames explained as the dominant class (Pull Ups, when the model has decided Pull Ups), which further supports this. Conclusion In this paper we have introduced a new use case for deep Taylor decomposition: separating and visualising the spatial and temporal components of an explanation for a spatio-temporal CNN, in a way that is easy to implement and incurs relatively little extra computational cost. By exploiting a simple method of removing motion information from video input, we have essentially generated a negative mask that, when applied to an explanation, removes the spatial component of the relevance. The resulting explanation provides much more insight into the salient motion in the input than the general relevance, which we show to be noisy with misleading spatial relevance, i.e., most edges in the frame. While we expose an unsuitability of the current implementation of the deep Taylor method for inputs with non-exchangeable dimensions, this work is ongoing. In the future, it will be necessary to formalise a method for exposing the true spatial relevance in the frame, as opposed to an approximation such as ours.
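To make the overall procedure concrete, the following sketch outlines the freeze-frame construction and the subtraction of spatial from spatio-temporal relevance described in this paper. It assumes clips laid out as (C, T, H, W) and a hypothetical explain(model, clip, target) helper that returns a relevance map with the same shape as its input (for instance, one built on LRP-style custom backward passes as sketched earlier); the helper and the function names are illustrative, not the authors' released code.

```python
# Hypothetical sketch of the discriminative-relevance procedure described above.
import torch

def freeze_frame(video: torch.Tensor, t: int) -> torch.Tensor:
    """Remove motion by repeating frame t across the temporal axis of a (C, T, H, W) clip."""
    return video[:, t:t + 1].repeat(1, video.shape[1], 1, 1)

def discriminative_relevance(explain, model, video: torch.Tensor, target: int):
    clip = video.unsqueeze(0)                       # (1, C, T, H, W)
    full = explain(model, clip, target)             # spatio-temporal relevance
    spatial = torch.zeros_like(full)
    for t in range(video.shape[1]):                 # one extra pass per frame
        frozen = freeze_frame(video, t).unsqueeze(0)
        # Explain the same class for the motion-free input and keep only frame t.
        spatial[:, :, t] = explain(model, frozen, target)[:, :, t]
    temporal = full - spatial                       # relevance attributed to motion
    return spatial, temporal
```

As noted above, the extra cost is one additional explanation pass per frame, i.e., linear in the temporal length of the sample.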
2,297
1908.01536
2965290354
Current techniques for explainable AI have been applied with some success to image processing. The recent rise of research in video processing has called for similar work in deconstructing and explaining spatio-temporal models. While many techniques are designed for 2D convolutional models, others are inherently applicable to any input domain. One such body of work, deep Taylor decomposition, propagates relevance from the model output distributively onto its input and thus is not restricted to image processing models. However, by exploiting a simple technique that removes motion information, we show that this technique is not effective as-is for representing relevance in non-image tasks. We instead propose a discriminative method that produces a naive representation of both the spatial and temporal relevance of a frame as two separate objects. This new discriminative relevance model exposes relevance in the frame attributed to motion, which was previously ambiguous in the original explanation. We observe the effectiveness of this technique on a range of samples from the UCF-101 action recognition dataset, two of which are demonstrated in this paper.
Layer-wise relevance propagation (LRP) rules, as defined in @cite_10 , have found moderate success in explaining image recognition tasks. Multiple implementations and improvements have been made to these rules, with marginal winning probability (MWP) @cite_4 being, to our knowledge, the first implementation of the rules. Deep Taylor decomposition, an implementation of LRP by the original authors themselves, has become very popular and, as a result of its input-domain agnosticism, has been applied to domains outside of image recognition, including activity recognition @cite_16 . It is for these reasons that we choose the deep Taylor method as the exemplar technique for our proposed method.
{ "abstract": [ "Compressed domain human action recognition algorithms are extremely efficient, because they only require a partial decoding of the video bit stream. However, the question what exactly makes these algorithms decide for a particular action is still a mystery. In this paper, we present a general method, Layer-wise Relevance Propagation (LRP), to understand and interpret action recognition algorithms and apply it to a state-of-the-art compressed domain method based on Fisher vector encoding and SVM classification. By using LRP, the classifiers decisions are propagated back every step in the action recognition pipeline until the input is reached. This methodology allows to identify where and when the important (from the classifier's perspective) action happens in the video. To our knowledge, this is the first work to interpret a compressed domain action recognition algorithm. We evaluate our method on the HMDB51 dataset and show that in many cases a few significant frames contribute most towards the prediction of the video to a particular class.", "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.", "We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. 
On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks." ], "cite_N": [ "@cite_16", "@cite_10", "@cite_4" ], "mid": [ "2691573567", "1787224781", "2776207810" ] }
Discriminating Spatial and Temporal Relevance in Deep Taylor Decompositions for Explainable Activity Recognition
Recent success in solving image recognition problems can be attributed to the application to these problems of increasingly complex convolutional neural networks (CNNs) that make use of spatial convolutional feature extractors. This success has been closely followed by a call for explainability and transparency of these networks, which have inherently been regarded as black-boxes. Significant efforts have been made towards explaining decisions in image recognition tasks, with nearly all of these producing explanations through an image medium [Bach et al., 2015;Simonyan et al., 2013;Baehrens et al., 2010;Zhou et al., 2016]. Even more recently, the success in solving image recognition problems has been followed by the appearance of analogous video recognition models that make effective use of CNNs by extending the convolution dimensionality to be spatio-temporal or 3D [Ji et al., 2013;Carreira and Zisserman, 2017;Hara et al., 2018]. Intuitively, the same methods that have been successful in explaining image models can be applied to video. Many of these methods, notably the popular deep Taylor decomposition [Montavon et al., 2017], function on any input without modification. However, the additional temporal dimension is not conceptually similar and, hence, exchangeable with the two spatial dimensions in the input. This is not accounted for by the model, which simply convolves the input in all dimensions in the same manner. This is reflected in explanations using image-based methods like deep Taylor. A pixel, or voxel in this case, is not marked whether it is temporally or spatially relevant, but on the basis of its combined spatio-temporal relevance. By applying the deep Taylor method to a 3D convolutional network trained on the UCF-101 activity recognition dataset [Soomro et al., 2012], and additionally explaining the same class for each individual frame as a separate input, we effectively explain the spatial relevance of that frame. We show that by subtracting this from the original explanation, one can expose the underlying relevance attributed to motion in the frame. Thus we propose a new discriminative relevance model, which reveals the relevance attributed to motion, that was previously hidden by the accompanying spatial component. Spatial and Temporal Relevance in 3D CNNs 3D CNNs 3D CNNs extend the convolutional layers to the third dimension. In 3D CNNs, a sliding cube passes over a 3D block formed by the frames of the video stacked one on top of another, as opposed to a sliding 2D window passing over the image in 2D CNNs. This results not only in spatial features, but also features of motion, being learned. In the process of explaining the input, however, the relevance for the video is deconstructed into the original frames, which can be animated in the same manner as the input itself. Although the frames can be staggered, made transparent, and visualised as a block (see [Stergiou et al., 2019] for an example), the explanation is essentially viewed as a series of images; this is also the case with [Srinivasan et al., 2017]. In this manner it is impossible to distinguish the effect of the motion of the objects in the frame. At the same time, discerning whether a segment of the frame is considered relevant because of its shape and colour, its spatial information, or because of its relationship to similar areas in the neighbouring frames, i.e., motion, can be important for explaining the decisions a CNN makes. 
While one can infer important frames from the approach in [Srinivasan et al., 2017], this does not address the issue of spatially relevant objects that aren't visible in other frames, nor does it necessarily localise the temporally important regions. Separating Spatial and Temporal Components through Discriminative Relevance Distinguishing spatial and temporal relevance of pixels (voxels) when using 3D CNNs is not always possible. A kernel does not necessarily focus on either spatial or temporal features, but in most cases will have components in both. As a result, decoupling spatial and temporal relevance is not intuitive given only the original explanation. Instead, the motion can be removed from the input video. By performing multiple additional forward passes, which each take as an input a single frame from the original video, the temporal information can effectively be excluded from the model's decision. The resulting video would depict a freeze frame at that instant, and thus the model would only have the spatial information in the scene to infer from. By doing this, we can build up an image of the purely spatial relevance in each frame. The required additional computation scales linearly with the size of the temporal dimension of an input sample. Intuitively, by downweighting relevance in the original explanation by the spatial relevance reconstructed from each frame's explanation, what is left will be the relevance based on motion. Implementation All work presented in this article is implemented in Python, using the deep learning platform PyTorch, with the autograd functionality modified to apply LRP rules during backpropagation in place of the normal gradient calculation. The explain PyTorch extension The PyTorch autograd library provides the basis on which symbolic graphs are created. The Function class can be extended to create mathematical operations with custom gradient functions. We adapted this library to work with LRP techniques as the unofficial explain library 1 . On a forward pass, a model using the explain Function class will act in the same way as a PyTorch model using the autograd Function class. Weights are also loaded into the model in the same way as PyTorch models. The functionality differs on the backwards pass. The autograd library allows for custom backwards functions to be defined. Through this feature, we implemented convolutional, batch norm, pooling and linear layers whose gradient functions instead propagate relevance. In this way, we can generate an LRP explanation for a model given an input, by performing a forward pass, and then backpropagating the relevance, beginning at the chosen output neuron, back onto the input sample. Model The model is a 3D CNN following the C3D design from [Tran et al., 2015], the architecture for which is shown in Figure 1. The code is adapted from an implementation available at https://github.com/jfzhang95/pytorch-video-recognition. We fine-tuned the pretrained weights made available, to 75% validation accuracy on the UCF-101 dataset. Deep Taylor Decomposition Our implementation of deep Taylor decomposition follows the most up-to-date version found on https://github.com/albermax/innvestigate, a repository maintained by one of the lead authors on the LRP and deep Taylor papers. The implementation is summarised as follows: • As in the original deep Taylor paper, ReLU nonlinearities simply pass on relevance, without modifying it in any way. 
$R_k = R_j$ • Relevance for pooling layers is generated by multiplying the input to the layer by the relevance w.r.t. that input, calculated using regular backpropagation rules for pooling layers. For max-pooling: $R_k = \delta_{kj} R_j$, where $\delta_{kj}$ is a mask indicating whether the neuron $k$ was selected by the pooling kernel $j$. For average-pooling: $R_k = R_j / N_j$, where $N_j$ is the number of values in the pooling kernel $j$. (This code is freely available at https://github.com/liamhiley/torchexplain.) • The convolutional layers use the $\alpha\beta$ relevance rule, which focuses explanations by injecting some negative relevance: $R_i = \sum_j \left( \alpha \frac{z_{ij}^{+}}{\sum_{i'} z_{i'j}^{+} + b_j^{+}} - \beta \frac{z_{ij}^{-}}{\sum_{i'} z_{i'j}^{-} + b_j^{-}} \right) R_j$ • The first convolutional layer (for which the input is the sample) uses the $z^{\beta}$ rule, which makes use of the restricted input space (0 to 255 for pixel values) in finding a root point: $R_i = \sum_j \frac{z_{ij} - l_i w_{ij}^{+} - h_i w_{ij}^{-}}{\sum_{i'} \left( z_{i'j} - l_{i'} w_{i'j}^{+} - h_{i'} w_{i'j}^{-} \right)} R_j$ Padding Without the use of adaptive pooling, the single-frame inputs would be too small to pass through the network; yet even with adaptive pooling, more of the frame's spatial information would be conserved than when it was originally part of the video. Instead, we padded the frame to the same size as the input video. We chose to pad the input by repeating the frame n times, where n is the number of frames in a typical input sample, rather than use zero padding, which would create false temporal information through the near-instant change from all pixels to black. This is supported by the findings in [Hooker et al., 2018], where a similar issue arose when quantifying the accuracy of feature attribution techniques like LRP by replacing relevant pixels with zero-black pixels. Results In this section we show the result of subtracting spatial relevance from a deep Taylor explanation of an input video, featuring a person performing pull-ups on a bar. The three different explanations can be seen in Figure 2. The frames shown are processed and explained as a three-dimensional block, and displayed as two-dimensional slices. In the original relevance (Figure 2: Row 2), most of the scene is marked relevant, with heavy focus on edges. This observation is reinforced by the spatial relevance (Figure 2: Row 3), which, in the absence of temporal information, highlights edges much more heavily. The agreement between the spatial and original explanations demonstrates the ambiguity in 3D explanations with gradient-based techniques like deep Taylor. Even with the spatial explanation as a reference, it is unclear what in the scene is relevant for its motion, as every object is to a degree marked relevant. The difference becomes clearer with the inclusion of the temporal explanation. (Figure 2 caption — Left: the 1st, 6th, 10th and 16th frames, respectively, from a 16-frame sample. Right: the 1st, 2nd, 3rd and 4th frames, respectively, from a 16-frame sample. 1st row: the original frames; 2nd row: the deep Taylor explanation for the sample; 3rd row: the spatial-only deep Taylor explanation for each frame; 4th row: the remaining temporal explanation after subtracting (3rd) from (2nd). Red is positive relevance, blue is negative relevance, white is no relevance.) Subtracting the spatial explanation from the original explanation shows a large amount of remaining relevance in the core of the man's body and his head. The relevance in the background, the metal frame and the video watermark are all negative as a result, suggesting they are all highly spatially relevant.
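As a companion to the rule list above, the snippet below sketches the $z^{\beta}$ input-layer rule for a dense layer, purely as an illustration of the rule; the paper applies it to the first 3D convolution, so this is not the authors' implementation, and the function name, the pixel bounds of 0 and 255, and the omission of bias terms (as in the equation above) are assumptions of the example.

```python
# Hypothetical illustration of the z^beta input-layer rule, written for a dense
# layer; l and h are the lowest and highest admissible pixel values.
import torch
import torch.nn.functional as F

def zbeta_relevance(x, weight, relevance_out, low=0.0, high=255.0, eps=1e-9):
    wp, wn = weight.clamp(min=0), weight.clamp(max=0)
    l = torch.full_like(x, low)
    h = torch.full_like(x, high)
    # Denominator: total contribution relative to the root point defined by (l, h).
    z = F.linear(x, weight) - F.linear(l, wp) - F.linear(h, wn) + eps
    s = relevance_out / z                      # per-output scaling factors
    # Redistribute relevance onto the input pixels.
    return x * (s @ weight) - l * (s @ wp) - h * (s @ wn)
```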
This effect is displayed for the beginning, end, and two intermediate frames of the sample. The bulk of the temporal relevance is found at the highest and lowest points of the exercise, but is absent from the intermediate frames. This suggests that the key moments of motion in the activity occur at the lowest and highest points of the pull-up, possibly due to the sharp change in movement, as at these points both the lowering and the raising of the body are observed. This information is much more difficult to infer from the original explanation, where the 1st and 16th frames are overall much more heavily red (or relevant). In this example, key frames as well as salient regions are highlighted for motion. In the second example, of a person serving a tennis ball, the activity is observed at a more fine-grained framerate. Likely because of this, the relevance of motion is more regular over the four neighbouring frames than over the sparsely sampled frames of the pull-ups. This serves more to highlight temporally relevant objects in the scene: specifically, the tennis ball and the person's upper body, from which the swinging motion originates. Again, in the original explanation this information is not clear. In fact, the ball and upper torso are relatively indistinguishable from spatially relevant features like the lawn and the building in the background. As seen in the spatial explanation, the relevance of these regions has a spatial component as well. Also interesting is the change in prediction by the model when given only spatial information. For the 1st, 6th, 10th and 16th frame-only inputs, the model predicted Wall Push-Ups, Golf Swing, Clean And Jerk, and Clean And Jerk again, respectively. While only two samples are illustrated in this paper, similar results were observed for other test samples in the UCF-101 dataset, indicating that these examples are not anomalous and that the method is general. The evidence that our method is an approximation is twofold. Firstly, the sum of the spatial and temporal relevance does not equal the original relevance, showing that the former are not true fractions of the latter. Furthermore, the spatial relevance for a non-dominant class (Pull Ups, when the model has decided Clean And Jerk) is greater than that for the same frames explained as the dominant class (Pull Ups, when the model has decided Pull Ups), which further supports this. Conclusion In this paper we have introduced a new use case for deep Taylor decomposition: separating and visualising the spatial and temporal components of an explanation for a spatio-temporal CNN, in a way that is easy to implement and incurs relatively little extra computational cost. By exploiting a simple method of removing motion information from video input, we have essentially generated a negative mask that, when applied to an explanation, removes the spatial component of the relevance. The resulting explanation provides much more insight into the salient motion in the input than the general relevance, which we show to be noisy with misleading spatial relevance, i.e., most edges in the frame. While we expose an unsuitability of the current implementation of the deep Taylor method for inputs with non-exchangeable dimensions, this work is ongoing. In the future, it will be necessary to formalise a method for exposing the true spatial relevance in the frame, as opposed to an approximation such as ours.
2,297
1908.01536
2965290354
Current techniques for explainable AI have been applied with some success to image processing. The recent rise of research in video processing has called for similar work in deconstructing and explaining spatio-temporal models. While many techniques are designed for 2D convolutional models, others are inherently applicable to any input domain. One such body of work, deep Taylor decomposition, propagates relevance from the model output distributively onto its input and thus is not restricted to image processing models. However, by exploiting a simple technique that removes motion information, we show that this technique is not effective as-is for representing relevance in non-image tasks. We instead propose a discriminative method that produces a naive representation of both the spatial and temporal relevance of a frame as two separate objects. This new discriminative relevance model exposes relevance in the frame attributed to motion, which was previously ambiguous in the original explanation. We observe the effectiveness of this technique on a range of samples from the UCF-101 action recognition dataset, two of which are demonstrated in this paper.
In addition to MWP, the authors in @cite_4 also show that removing relevance for the dual of the signal improves the focus of the explanation. This contrastive MWP (cMWP) effectively removes relevance to all classes, by explaining all other outputs at the second logits layer, leaving only relevance contributing to the chosen output neuron. Our method is similar to cMWP, in that we make use of subtraction of separate LRP signals to remove unwanted relevance. However, we backpropagate both signals through the network fully before subtracting. Where the cMWP method removes relevance towards all classes from the explanation, our method removes relevance towards spatially salient features in the frame, such as edges and background objects.
{ "abstract": [ "We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks." ], "cite_N": [ "@cite_4" ], "mid": [ "2776207810" ] }
Discriminating Spatial and Temporal Relevance in Deep Taylor Decompositions for Explainable Activity Recognition
Recent success in solving image recognition problems can be attributed to the application to these problems of increasingly complex convolutional neural networks (CNNs) that make use of spatial convolutional feature extractors. This success has been closely followed by a call for explainability and transparency of these networks, which have inherently been regarded as black-boxes. Significant efforts have been made towards explaining decisions in image recognition tasks, with nearly all of these producing explanations through an image medium [Bach et al., 2015;Simonyan et al., 2013;Baehrens et al., 2010;Zhou et al., 2016]. Even more recently, the success in solving image recognition problems has been followed by the appearance of analogous video recognition models that make effective use of CNNs by extending the convolution dimensionality to be spatio-temporal or 3D [Ji et al., 2013;Carreira and Zisserman, 2017;Hara et al., 2018]. Intuitively, the same methods that have been successful in explaining image models can be applied to video. Many of these methods, notably the popular deep Taylor decomposition [Montavon et al., 2017], function on any input without modification. However, the additional temporal dimension is not conceptually similar and, hence, exchangeable with the two spatial dimensions in the input. This is not accounted for by the model, which simply convolves the input in all dimensions in the same manner. This is reflected in explanations using image-based methods like deep Taylor. A pixel, or voxel in this case, is not marked whether it is temporally or spatially relevant, but on the basis of its combined spatio-temporal relevance. By applying the deep Taylor method to a 3D convolutional network trained on the UCF-101 activity recognition dataset [Soomro et al., 2012], and additionally explaining the same class for each individual frame as a separate input, we effectively explain the spatial relevance of that frame. We show that by subtracting this from the original explanation, one can expose the underlying relevance attributed to motion in the frame. Thus we propose a new discriminative relevance model, which reveals the relevance attributed to motion, that was previously hidden by the accompanying spatial component. Spatial and Temporal Relevance in 3D CNNs 3D CNNs 3D CNNs extend the convolutional layers to the third dimension. In 3D CNNs, a sliding cube passes over a 3D block formed by the frames of the video stacked one on top of another, as opposed to a sliding 2D window passing over the image in 2D CNNs. This results not only in spatial features, but also features of motion, being learned. In the process of explaining the input, however, the relevance for the video is deconstructed into the original frames, which can be animated in the same manner as the input itself. Although the frames can be staggered, made transparent, and visualised as a block (see [Stergiou et al., 2019] for an example), the explanation is essentially viewed as a series of images; this is also the case with [Srinivasan et al., 2017]. In this manner it is impossible to distinguish the effect of the motion of the objects in the frame. At the same time, discerning whether a segment of the frame is considered relevant because of its shape and colour, its spatial information, or because of its relationship to similar areas in the neighbouring frames, i.e., motion, can be important for explaining the decisions a CNN makes. 
While one can infer important frames from the approach in [Srinivasan et al., 2017], this does not address the issue of spatially relevant objects that aren't visible in other frames, nor does it necessarily localise the temporally important regions. Separating Spatial and Temporal Components through Discriminative Relevance Distinguishing spatial and temporal relevance of pixels (voxels) when using 3D CNNs is not always possible. A kernel does not necessarily focus on either spatial or temporal features, but in most cases will have components in both. As a result, decoupling spatial and temporal relevance is not intuitive given only the original explanation. Instead, the motion can be removed from the input video. By performing multiple additional forward passes, which each take as an input a single frame from the original video, the temporal information can effectively be excluded from the model's decision. The resulting video would depict a freeze frame at that instant, and thus the model would only have the spatial information in the scene to infer from. By doing this, we can build up an image of the purely spatial relevance in each frame. The required additional computation scales linearly with the size of the temporal dimension of an input sample. Intuitively, by downweighting relevance in the original explanation by the spatial relevance reconstructed from each frame's explanation, what is left will be the relevance based on motion. Implementation All work presented in this article is implemented in Python, using the deep learning platform PyTorch, with the autograd functionality modified to apply LRP rules during backpropagation in place of the normal gradient calculation. The explain PyTorch extension The PyTorch autograd library provides the basis on which symbolic graphs are created. The Function class can be extended to create mathematical operations with custom gradient functions. We adapted this library to work with LRP techniques as the unofficial explain library 1 . On a forward pass, a model using the explain Function class will act in the same way as a PyTorch model using the autograd Function class. Weights are also loaded into the model in the same way as PyTorch models. The functionality differs on the backwards pass. The autograd library allows for custom backwards functions to be defined. Through this feature, we implemented convolutional, batch norm, pooling and linear layers whose gradient functions instead propagate relevance. In this way, we can generate an LRP explanation for a model given an input, by performing a forward pass, and then backpropagating the relevance, beginning at the chosen output neuron, back onto the input sample. Model The model is a 3D CNN following the C3D design from [Tran et al., 2015], the architecture for which is shown in Figure 1. The code is adapted from an implementation available at https://github.com/jfzhang95/pytorch-video-recognition. We fine-tuned the pretrained weights made available, to 75% validation accuracy on the UCF-101 dataset. Deep Taylor Decomposition Our implementation of deep Taylor decomposition follows the most up-to-date version found on https://github.com/albermax/innvestigate, a repository maintained by one of the lead authors on the LRP and deep Taylor papers. The implementation is summarised as follows: • As in the original deep Taylor paper, ReLU nonlinearities simply pass on relevance, without modifying it in any way. 
$R_k = R_j$ • Relevance for pooling layers is generated by multiplying the input to the layer by the relevance w.r.t. that input, calculated using regular backpropagation rules for pooling layers. For max-pooling: $R_k = \delta_{kj} R_j$, where $\delta_{kj}$ is a mask indicating whether the neuron $k$ was selected by the pooling kernel $j$. For average-pooling: $R_k = R_j / N_j$, where $N_j$ is the number of values in the pooling kernel $j$. (This code is freely available at https://github.com/liamhiley/torchexplain.) • The convolutional layers use the $\alpha\beta$ relevance rule, which focuses explanations by injecting some negative relevance: $R_i = \sum_j \left( \alpha \frac{z_{ij}^{+}}{\sum_{i'} z_{i'j}^{+} + b_j^{+}} - \beta \frac{z_{ij}^{-}}{\sum_{i'} z_{i'j}^{-} + b_j^{-}} \right) R_j$ • The first convolutional layer (for which the input is the sample) uses the $z^{\beta}$ rule, which makes use of the restricted input space (0 to 255 for pixel values) in finding a root point: $R_i = \sum_j \frac{z_{ij} - l_i w_{ij}^{+} - h_i w_{ij}^{-}}{\sum_{i'} \left( z_{i'j} - l_{i'} w_{i'j}^{+} - h_{i'} w_{i'j}^{-} \right)} R_j$ Padding Without the use of adaptive pooling, the single-frame inputs would be too small to pass through the network; yet even with adaptive pooling, more of the frame's spatial information would be conserved than when it was originally part of the video. Instead, we padded the frame to the same size as the input video. We chose to pad the input by repeating the frame n times, where n is the number of frames in a typical input sample, rather than use zero padding, which would create false temporal information through the near-instant change from all pixels to black. This is supported by the findings in [Hooker et al., 2018], where a similar issue arose when quantifying the accuracy of feature attribution techniques like LRP by replacing relevant pixels with zero-black pixels. Results In this section we show the result of subtracting spatial relevance from a deep Taylor explanation of an input video, featuring a person performing pull-ups on a bar. The three different explanations can be seen in Figure 2. The frames shown are processed and explained as a three-dimensional block, and displayed as two-dimensional slices. In the original relevance (Figure 2: Row 2), most of the scene is marked relevant, with heavy focus on edges. This observation is reinforced by the spatial relevance (Figure 2: Row 3), which, in the absence of temporal information, highlights edges much more heavily. The agreement between the spatial and original explanations demonstrates the ambiguity in 3D explanations with gradient-based techniques like deep Taylor. Even with the spatial explanation as a reference, it is unclear what in the scene is relevant for its motion, as every object is to a degree marked relevant. The difference becomes clearer with the inclusion of the temporal explanation. (Figure 2 caption — Left: the 1st, 6th, 10th and 16th frames, respectively, from a 16-frame sample. Right: the 1st, 2nd, 3rd and 4th frames, respectively, from a 16-frame sample. 1st row: the original frames; 2nd row: the deep Taylor explanation for the sample; 3rd row: the spatial-only deep Taylor explanation for each frame; 4th row: the remaining temporal explanation after subtracting (3rd) from (2nd). Red is positive relevance, blue is negative relevance, white is no relevance.) Subtracting the spatial explanation from the original explanation shows a large amount of remaining relevance in the core of the man's body and his head. The relevance in the background, the metal frame and the video watermark are all negative as a result, suggesting they are all highly spatially relevant.
This effect is displayed for the beginning, end, and two intermediate frames of the sample. The bulk of the temporal relevance is found at the highest and lowest points of the exercise, but is absent from the intermediate frames. This suggests that the key moments of motion in the activity occur at the lowest and highest points of the pull-up, possibly due to the sharp change in movement, as at these points both the lowering and the raising of the body are observed. This information is much more difficult to infer from the original explanation, where the 1st and 16th frames are overall much more heavily red (or relevant). In this example, key frames as well as salient regions are highlighted for motion. In the second example, of a person serving a tennis ball, the activity is observed at a more fine-grained framerate. Likely because of this, the relevance of motion is more regular over the four neighbouring frames than over the sparsely sampled frames of the pull-ups. This serves more to highlight temporally relevant objects in the scene: specifically, the tennis ball and the person's upper body, from which the swinging motion originates. Again, in the original explanation this information is not clear. In fact, the ball and upper torso are relatively indistinguishable from spatially relevant features like the lawn and the building in the background. As seen in the spatial explanation, the relevance of these regions has a spatial component as well. Also interesting is the change in prediction by the model when given only spatial information. For the 1st, 6th, 10th and 16th frame-only inputs, the model predicted Wall Push-Ups, Golf Swing, Clean And Jerk, and Clean And Jerk again, respectively. While only two samples are illustrated in this paper, similar results were observed for other test samples in the UCF-101 dataset, indicating that these examples are not anomalous and that the method is general. The evidence that our method is an approximation is twofold. Firstly, the sum of the spatial and temporal relevance does not equal the original relevance, showing that the former are not true fractions of the latter. Furthermore, the spatial relevance for a non-dominant class (Pull Ups, when the model has decided Clean And Jerk) is greater than that for the same frames explained as the dominant class (Pull Ups, when the model has decided Pull Ups), which further supports this. Conclusion In this paper we have introduced a new use case for deep Taylor decomposition: separating and visualising the spatial and temporal components of an explanation for a spatio-temporal CNN, in a way that is easy to implement and incurs relatively little extra computational cost. By exploiting a simple method of removing motion information from video input, we have essentially generated a negative mask that, when applied to an explanation, removes the spatial component of the relevance. The resulting explanation provides much more insight into the salient motion in the input than the general relevance, which we show to be noisy with misleading spatial relevance, i.e., most edges in the frame. While we expose an unsuitability of the current implementation of the deep Taylor method for inputs with non-exchangeable dimensions, this work is ongoing. In the future, it will be necessary to formalise a method for exposing the true spatial relevance in the frame, as opposed to an approximation such as ours.
2,297
1908.01536
2965290354
Current techniques for explainable AI have been applied with some success to image processing. The recent rise of research in video processing has called for similar work in deconstructing and explaining spatio-temporal models. While many techniques are designed for 2D convolutional models, others are inherently applicable to any input domain. One such body of work, deep Taylor decomposition, propagates relevance from the model output distributively onto its input and thus is not restricted to image processing models. However, by exploiting a simple technique that removes motion information, we show that this technique is not effective as-is for representing relevance in non-image tasks. We instead propose a discriminative method that produces a naive representation of both the spatial and temporal relevance of a frame as two separate objects. This new discriminative relevance model exposes relevance in the frame attributed to motion, which was previously ambiguous in the original explanation. We observe the effectiveness of this technique on a range of samples from the UCF-101 action recognition dataset, two of which are demonstrated in this paper.
Work on explainability methods outside of image tasks is still developing. Papers such as @cite_1 use feature visualisation techniques to provide insight into the models they have trained, but to our knowledge @cite_16 is still one of the only instances of an LRP-based method applied to a video task. In that work, the difference in relevance between frames is highlighted by flattening the explanation block and plotting the overall relevance, which shows that frames at certain points in an activity are more relevant overall. Saliency tubes, as proposed in @cite_9 , adapts the CAM technique of @cite_7 @cite_6 to localise salient motion in video frames. This method is the most similar to ours in that it highlights motion in 3D CNNs.
{ "abstract": [ "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them", "", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.", "We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. 
For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E.", "Compressed domain human action recognition algorithms are extremely efficient, because they only require a partial decoding of the video bit stream. However, the question what exactly makes these algorithms decide for a particular action is still a mystery. In this paper, we present a general method, Layer-wise Relevance Propagation (LRP), to understand and interpret action recognition algorithms and apply it to a state-of-the-art compressed domain method based on Fisher vector encoding and SVM classification. By using LRP, the classifiers decisions are propagated back every step in the action recognition pipeline until the input is reached. This methodology allows to identify where and when the important (from the classifier's perspective) action happens in the video. To our knowledge, this is the first work to interpret a compressed domain action recognition algorithm. We evaluate our method on the HMDB51 dataset and show that in many cases a few significant frames contribute most towards the prediction of the video to a particular class." ], "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_6", "@cite_16" ], "mid": [ "2950328304", "", "2619082050", "2616247523", "2691573567" ] }
Discriminating Spatial and Temporal Relevance in Deep Taylor Decompositions for Explainable Activity Recognition
Recent success in solving image recognition problems can be attributed to the application to these problems of increasingly complex convolutional neural networks (CNNs) that make use of spatial convolutional feature extractors. This success has been closely followed by a call for explainability and transparency of these networks, which have inherently been regarded as black-boxes. Significant efforts have been made towards explaining decisions in image recognition tasks, with nearly all of these producing explanations through an image medium [Bach et al., 2015;Simonyan et al., 2013;Baehrens et al., 2010;Zhou et al., 2016]. Even more recently, the success in solving image recognition problems has been followed by the appearance of analogous video recognition models that make effective use of CNNs by extending the convolution dimensionality to be spatio-temporal or 3D [Ji et al., 2013;Carreira and Zisserman, 2017;Hara et al., 2018]. Intuitively, the same methods that have been successful in explaining image models can be applied to video. Many of these methods, notably the popular deep Taylor decomposition [Montavon et al., 2017], function on any input without modification. However, the additional temporal dimension is not conceptually similar and, hence, exchangeable with the two spatial dimensions in the input. This is not accounted for by the model, which simply convolves the input in all dimensions in the same manner. This is reflected in explanations using image-based methods like deep Taylor. A pixel, or voxel in this case, is not marked whether it is temporally or spatially relevant, but on the basis of its combined spatio-temporal relevance. By applying the deep Taylor method to a 3D convolutional network trained on the UCF-101 activity recognition dataset [Soomro et al., 2012], and additionally explaining the same class for each individual frame as a separate input, we effectively explain the spatial relevance of that frame. We show that by subtracting this from the original explanation, one can expose the underlying relevance attributed to motion in the frame. Thus we propose a new discriminative relevance model, which reveals the relevance attributed to motion, that was previously hidden by the accompanying spatial component. Spatial and Temporal Relevance in 3D CNNs 3D CNNs 3D CNNs extend the convolutional layers to the third dimension. In 3D CNNs, a sliding cube passes over a 3D block formed by the frames of the video stacked one on top of another, as opposed to a sliding 2D window passing over the image in 2D CNNs. This results not only in spatial features, but also features of motion, being learned. In the process of explaining the input, however, the relevance for the video is deconstructed into the original frames, which can be animated in the same manner as the input itself. Although the frames can be staggered, made transparent, and visualised as a block (see [Stergiou et al., 2019] for an example), the explanation is essentially viewed as a series of images; this is also the case with [Srinivasan et al., 2017]. In this manner it is impossible to distinguish the effect of the motion of the objects in the frame. At the same time, discerning whether a segment of the frame is considered relevant because of its shape and colour, its spatial information, or because of its relationship to similar areas in the neighbouring frames, i.e., motion, can be important for explaining the decisions a CNN makes. 
While one can infer important frames from the approach in [Srinivasan et al., 2017], this does not address the issue of spatially relevant objects that aren't visible in other frames, nor does it necessarily localise the temporally important regions. Separating Spatial and Temporal Components through Discriminative Relevance Distinguishing spatial and temporal relevance of pixels (voxels) when using 3D CNNs is not always possible. A kernel does not necessarily focus on either spatial or temporal features, but in most cases will have components in both. As a result, decoupling spatial and temporal relevance is not intuitive given only the original explanation. Instead, the motion can be removed from the input video. By performing multiple additional forward passes, which each take as an input a single frame from the original video, the temporal information can effectively be excluded from the model's decision. The resulting video would depict a freeze frame at that instant, and thus the model would only have the spatial information in the scene to infer from. By doing this, we can build up an image of the purely spatial relevance in each frame. The required additional computation scales linearly with the size of the temporal dimension of an input sample. Intuitively, by downweighting relevance in the original explanation by the spatial relevance reconstructed from each frame's explanation, what is left will be the relevance based on motion. Implementation All work presented in this article is implemented in Python, using the deep learning platform PyTorch, with the autograd functionality modified to apply LRP rules during backpropagation in place of the normal gradient calculation. The explain PyTorch extension The PyTorch autograd library provides the basis on which symbolic graphs are created. The Function class can be extended to create mathematical operations with custom gradient functions. We adapted this library to work with LRP techniques as the unofficial explain library 1 . On a forward pass, a model using the explain Function class will act in the same way as a PyTorch model using the autograd Function class. Weights are also loaded into the model in the same way as PyTorch models. The functionality differs on the backwards pass. The autograd library allows for custom backwards functions to be defined. Through this feature, we implemented convolutional, batch norm, pooling and linear layers whose gradient functions instead propagate relevance. In this way, we can generate an LRP explanation for a model given an input, by performing a forward pass, and then backpropagating the relevance, beginning at the chosen output neuron, back onto the input sample. Model The model is a 3D CNN following the C3D design from [Tran et al., 2015], the architecture for which is shown in Figure 1. The code is adapted from an implementation available at https://github.com/jfzhang95/pytorch-video-recognition. We fine-tuned the pretrained weights made available, to 75% validation accuracy on the UCF-101 dataset. Deep Taylor Decomposition Our implementation of deep Taylor decomposition follows the most up-to-date version found on https://github.com/albermax/innvestigate, a repository maintained by one of the lead authors on the LRP and deep Taylor papers. The implementation is summarised as follows: • As in the original deep Taylor paper, ReLU nonlinearities simply pass on relevance, without modifying it in any way. 
$R_k = R_j$ • Relevance for pooling layers is generated by multiplying the input to the layer by the relevance w.r.t. that input, calculated using regular backpropagation rules for pooling layers. For max-pooling: $R_k = \delta_{kj} R_j$, where $\delta_{kj}$ is a mask indicating whether the neuron $k$ was selected by the pooling kernel $j$. For average-pooling: $R_k = R_j / N_j$, where $N_j$ is the number of values in the pooling kernel $j$. (This code is freely available at https://github.com/liamhiley/torchexplain.) • The convolutional layers use the $\alpha\beta$ relevance rule, which focuses explanations by injecting some negative relevance: $R_i = \sum_j \left( \alpha \frac{z_{ij}^{+}}{\sum_{i'} z_{i'j}^{+} + b_j^{+}} - \beta \frac{z_{ij}^{-}}{\sum_{i'} z_{i'j}^{-} + b_j^{-}} \right) R_j$ • The first convolutional layer (for which the input is the sample) uses the $z^{\beta}$ rule, which makes use of the restricted input space (0 to 255 for pixel values) in finding a root point: $R_i = \sum_j \frac{z_{ij} - l_i w_{ij}^{+} - h_i w_{ij}^{-}}{\sum_{i'} \left( z_{i'j} - l_{i'} w_{i'j}^{+} - h_{i'} w_{i'j}^{-} \right)} R_j$ Padding Without the use of adaptive pooling, the single-frame inputs would be too small to pass through the network; yet even with adaptive pooling, more of the frame's spatial information would be conserved than when it was originally part of the video. Instead, we padded the frame to the same size as the input video. We chose to pad the input by repeating the frame n times, where n is the number of frames in a typical input sample, rather than use zero padding, which would create false temporal information through the near-instant change from all pixels to black. This is supported by the findings in [Hooker et al., 2018], where a similar issue arose when quantifying the accuracy of feature attribution techniques like LRP by replacing relevant pixels with zero-black pixels. Results In this section we show the result of subtracting spatial relevance from a deep Taylor explanation of an input video, featuring a person performing pull-ups on a bar. The three different explanations can be seen in Figure 2. The frames shown are processed and explained as a three-dimensional block, and displayed as two-dimensional slices. In the original relevance (Figure 2: Row 2), most of the scene is marked relevant, with heavy focus on edges. This observation is reinforced by the spatial relevance (Figure 2: Row 3), which, in the absence of temporal information, highlights edges much more heavily. The agreement between the spatial and original explanations demonstrates the ambiguity in 3D explanations with gradient-based techniques like deep Taylor. Even with the spatial explanation as a reference, it is unclear what in the scene is relevant for its motion, as every object is to a degree marked relevant. The difference becomes clearer with the inclusion of the temporal explanation. (Figure 2 caption — Left: the 1st, 6th, 10th and 16th frames, respectively, from a 16-frame sample. Right: the 1st, 2nd, 3rd and 4th frames, respectively, from a 16-frame sample. 1st row: the original frames; 2nd row: the deep Taylor explanation for the sample; 3rd row: the spatial-only deep Taylor explanation for each frame; 4th row: the remaining temporal explanation after subtracting (3rd) from (2nd). Red is positive relevance, blue is negative relevance, white is no relevance.) Subtracting the spatial explanation from the original explanation shows a large amount of remaining relevance in the core of the man's body and his head. The relevance in the background, the metal frame and the video watermark are all negative as a result, suggesting they are all highly spatially relevant.
This effect is displayed for the beginning, end, and two intermediate frames of the sample. The bulk of the temporal relevance is found at the highest and lowest points of the exercise, but is absent from the intermediate frames. This suggests that the key moments of motion in the activity occur at the lowest and highest points of the pull-up, possibly due to the sharp change in movement, as at these points both the lowering and raising of the body are observed. This information is much more difficult to infer from the original explanation, where the 1st and 16th frames are overall much more heavily red (or relevant). In this example, key frames as well as salient regions are highlighted for motion. In the second example, of a person serving a tennis ball, the activity is observed at a more fine-grained framerate. Likely because of this, the relevance of motion is more regular over the 4 neighbouring frames, when compared to the sparsely sampled frames of the pull-ups. This serves more to highlight temporally relevant objects in the scene: specifically, the tennis ball and the person's upper body, where the swinging motion originates. Again, in the original explanation this information is not clear. In fact, the ball and upper torso are relatively indistinguishable from spatially relevant features like the lawn and the building in the background. As seen in the spatial explanation, the relevance of these regions has a spatial component as well. Also interesting is the change in prediction by the model when given only spatial information. For the 1st, 6th, 10th and 16th frame-only inputs, the model predicted Wall Push-Ups, Golf Swing, Clean And Jerk, and Clean And Jerk again, respectively. While only two samples are illustrated in this paper, similar results were observed for other test samples in the UCF-101 dataset, indicating that these examples are not anomalous and the method is general. The evidence that our method is an approximation is twofold. Firstly, the sum of the spatial and temporal relevance is not equal to the original relevance, which shows that the former are not true fractions of the latter. Furthermore, the fact that the spatial relevance for the non-dominant class (Pull Ups, when the model has decided Clean And Jerk) is greater than that for the same frames explained as the dominant class (Pull Ups, when the model has decided Pull Ups) supports this. Conclusion In this paper we have introduced a new use case for separating and visualising the spatial and temporal components of an explanation by deep Taylor decomposition for a spatiotemporal CNN; the approach is easy to implement and incurs relatively little extra computational cost. By exploiting a simple method of removing motion information from video input, we have essentially generated a negative mask that can be applied to an explanation to remove the spatial component of the relevance. The resulting explanation provides much more insight into the salient motion in the input than the general relevance, which we show is noisy with misleading spatial relevance, i.e., most edges in the frame. While we expose an unsuitability of the current implementation of the deep Taylor method for inputs with non-exchangeable dimensions, this work is ongoing. In the future, it will be necessary to formalise a method for exposing the true spatial relevance in the frame, as opposed to an approximation such as our method.
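As an illustration of the freeze-frame procedure described in this section, the sketch below shows how the spatial and temporal relevance maps could be assembled around an existing LRP backend. It is only a sketch of the idea, not the authors' implementation: `lrp_explain` is a hypothetical stand-in for whatever deep Taylor/LRP routine is in use (for example, the explain extension described above), and averaging the frozen clip's relevance over its identical frames is one of several reasonable ways to collapse it back to a single frame.

```python
import torch

def decompose_relevance(model, lrp_explain, video, target_class):
    """Approximate spatial/temporal decomposition of a 3D-CNN explanation,
    following the freeze-frame procedure described above.

    `lrp_explain(model, clip, target_class)` is assumed to return a relevance
    tensor with the same shape as `clip`; it is a placeholder for the LRP
    backend. `video` has shape (1, C, T, H, W).
    """
    _, _, T, _, _ = video.shape

    # Relevance of the full clip: mixes spatial and temporal evidence.
    r_orig = lrp_explain(model, video, target_class)

    # Spatial-only relevance: one extra explanation per frame, with the frame
    # repeated T times so that no motion information remains.
    r_spatial = torch.zeros_like(r_orig)
    for t in range(T):
        frozen = video[:, :, t:t + 1].repeat(1, 1, T, 1, 1)
        r_frozen = lrp_explain(model, frozen, target_class)
        # Collapse the (identical) frames of the frozen clip back to frame t.
        r_spatial[:, :, t] = r_frozen.mean(dim=2)

    # Whatever is left after removing the reconstructed spatial relevance is
    # attributed to motion.
    r_temporal = r_orig - r_spatial
    return r_spatial, r_temporal
```

With the 16-frame C3D setup used here, the loop adds one extra explanation pass per frame, which matches the linear scaling in the temporal dimension noted above.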
2,297
1908.00948
2966220025
Spurred by the potential of deep learning, computational music generation has gained renewed academic interest. A crucial issue in music generation is that of user control, especially in scenarios where the music generation process is conditioned on existing musical material. Here we propose a model for conditional kick drum track generation that takes existing musical material as input, in addition to a low-dimensional code that encodes the desired relation between the existing material and the new material to be generated. These relational codes are learned in an unsupervised manner from a music dataset. We show that codes can be sampled to create a variety of musically plausible kick drum tracks and that the model can be used to transfer kick drum patterns from one song to another. Lastly, we demonstrate that the learned codes are largely invariant to tempo and time-shift.
In addition to the VAE-based methods for control over music generation processes mentioned above, a number of other studies have applied deep learning methods to address the problem of music generation in general, as reviewed in @cite_19 . Drum track generation has been tackled using recurrent architectures @cite_4 @cite_15 , Restricted Boltzmann Machines @cite_5 , and Generative Adversarial Networks (GANs) @cite_3 . Approaches to the generation process may rely on sampling from some latent representation of the material to be generated @cite_20 @cite_11 , possibly in an incremental fashion @cite_6 , or conditioning on user-provided information (such as a style label @cite_13 , unary @cite_1 , or structural @cite_12 constraints). @cite_16 demonstrates style transfer for audio. GANs are used in @cite_3 @cite_2 , where the output of the generation process is determined by providing some (time-varying) noise, in combination with conditioning on existing material. Similar to our study, @cite_18 uses a GAE to model relations between musical material in an autoregressive prediction task. To our knowledge this is the first use of GAEs for conditional music generation.
{ "abstract": [ "", "Machine learning has shown a successful component of methods for automatic music composition. Considering music as a sequence of events with multiple complex dependencies on various levels of a composition, the long short-term memory-based (LSTM) architectures have been proven to be very efficient in learning and reproducing musical styles. The “rampant force” of these architectures, however, makes them hardly useful for tasks that incorporate human input or generally constraints. Such an example is the generation of drums’ rhythms under a given metric structure (potentially combining different time signatures), with a given instrumentation (e.g. bass and guitar notes). This paper presents a solution that harnesses the LSTM sequence learner with a feed-forward (FF) part which is called the “Conditional Layer”. The LSTM and the FF layers influence (are merged into) a single layer making the final decision about the next drums’ event, given previous events (LSTM layer) and current constraints (FF layer). The resulting architecture is called the conditional neural sequence learner (CNSL). Results on drums’ rhythm sequences are presented indicating that the CNSL architecture is effective in producing drums’ sequences that resemble a learnt style, while at the same time conform to given constraints; impressively, the CNSL is able to compose drums’ rhythms in time signatures it has not encountered during training (e.g. 17 16), which resemble the characteristics of the rhythms in the original data.", "Recurrent neural networks (RNNs) are now widely used on sequence generation tasks due to their ability to learn long-range dependencies and to generate sequences of arbitrary length. However, their left-to-right generation procedure only allows a limited control from a potential user which makes them unsuitable for interactive and creative usages such as interactive music generation. This article introduces a novel architecture called anticipation-RNN which possesses the assets of the RNN-based generative models while allowing to enforce user-defined unary constraints. We demonstrate its efficiency on the task of generating melodies satisfying unary constraints in the style of the soprano parts of the J.S. Bach chorale harmonizations. Sampling using the anticipation-RNN is of the same order of complexity than sampling from the traditional RNN model. This fast and interactive generation of musical sequences opens ways to devise real-time systems that could be used for creative purposes.", "", "This paper introduces DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces. We claim that, after being trained on the chorale harmonizations by Johann Sebastian Bach, our model is capable of generating highly convincing chorales in the style of Bach. DeepBach's strength comes from the use of pseudo-Gibbs sampling coupled with an adapted representation of musical data. This is in contrast with many automatic music composition approaches which tend to compose music sequentially. Our model is also steerable in the sense that a user can constrain the generation by imposing positional constraints such as notes, rhythms or cadences in the generated score. We also provide a plugin on top of the MuseScore music editor making the interaction with Deep-Bach easy to use.", "This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. 
We propose a methodology based on five dimensions for our analysis: Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. - For what destination and for what use? To be performed by a human(s) (in the case of a musical score), or by a machine (in the case of an audio file). Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. - What format is to be used? Examples are: MIDI, piano roll or text. - How will the representation be encoded? Examples are: scalar, one-hot or many-hot. Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks. Challenge - What are the limitations and open challenges? Examples are: variability, interactivity and creativity. Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation. For each dimension, we conduct a comparative analysis of various models and techniques and we propose some tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning based systems for music generation selected from the relevant literature. These systems are described and are used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.", "Research on automatic music generation has seen great progress due to the development of deep neural networks. However, the generation of multi-instrument music of arbitrary genres still remains a challenge. Existing research either works on lead sheets or multi-track piano-rolls found in MIDIs, but both musical notations have their limits. In this work, we propose a new task called lead sheet arrangement to avoid such limits. A new recurrent convolutional generative model for the task is proposed, along with three new symbolic-domain harmonic features to facilitate learning from unpaired lead sheets and MIDIs. Our model can generate lead sheets and their arrangements of eight-bar long. Source code and audio samples of the generated result can be found at the project webpage: https: liuhaumin. github.io LeadsheetArrangement", "", "", "“Style transfer” among images has recently emerged as a very active research topic, fuelled by the power of convolution neural networks (CNNs), and has become fast a very popular technology in social media. This paper investigates the analogous problem in the audio domain: How to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer method, the proposed process is initialized by the target content instead of random noise and the optimized loss is only about texture, not structure. These differences proved key for audio style transfer in our experiments. In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. 
Experimental results on different types of audio signal confirm the potential of the proposed approach.", "", "We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient des- cent constraint optimisation to provide further control over the genera- tion process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and trans- ferred as constraints to the newly generated material. The sampling pro- cess is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.", "The Variational Autoencoder (VAE) has proven to be an effective model for producing semantically meaningful latent representations for natural data. However, it has thus far seen limited application to sequential data, and, as we demonstrate, existing recurrent VAE models have difficulty modeling sequences with long-term structure. To address this issue, we propose the use of a hierarchical decoder, which first outputs embeddings for subsequences of the input and then uses these embeddings to generate each subsequence independently. This structure encourages the model to utilize its latent code, thereby avoiding the \"posterior collapse\" problem which remains an issue for recurrent VAEs. We apply this architecture to modeling sequences of musical notes and find that it exhibits dramatically better sampling, interpolation, and reconstruction performance than a \"flat\" baseline model. An implementation of our \"MusicVAE\" is available online at this http URL", "Discovering and exploring the underlying structure of multi-instrumental music using learning-based approaches remains an open problem. We extend the recent MusicVAE model to represent multitrack polyphonic measures as vectors in a latent space. Our approach enables several useful operations such as generating plausible measures from scratch, interpolating between measures in a musically meaningful way, and manipulating specific musical attributes. We also introduce chord conditioning, which allows all of these operations to be performed while keeping harmony fixed, and allows chords to be changed while maintaining musical \"style\". By generating a sequence of measures over a predefined chord progression, our model can produce music with convincing long-term structure. We demonstrate that our latent space model makes it possible to intuitively control and generate musical sequences with rich instrumentation (see this https URL for generated audio)." ], "cite_N": [ "@cite_18", "@cite_4", "@cite_1", "@cite_3", "@cite_6", "@cite_19", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2886204396", "2889880375", "2901638613", "2963681776", "2963575853", "2752134738", "2884558435", "2792826772", "", "2766465839", "", "2579406683", "2792210438", "2805697608" ] }
HIGH-LEVEL CONTROL OF DRUM TRACK GENERATION USING LEARNED PATTERNS OF RHYTHMIC INTERACTION
A crucial issue in music generation is that of user control. Especially for problems where musical material is to be generated conditioned on existing musical material (so-called conditional generation), it is not desirable for a system to produce its output deterministically. Typically there are multiple valid ways to complement existing material with new material, and a music generation system should reflect that degree of freedom, either by modeling it as a predictive distribution from which samples can be drawn and evaluated by the user, or by letting the generated material depend on some form of user input in addition to the existing material. An intuitive way to address this requirement is to learn a latent space, for example by means of a variational autoencoder (VAE). This approach has been successfully applied to music generation [1,2], and allows for both generation and manipulation of musical material by sampling from the latent prior, manual exploration of the latent space, or some form of local neighborhood search or interpolation. In this paper we also take a latent space learning approach to address the issue of control over music generation. More specifically, we propose a model architecture to learn a latent space that encodes rhythmic interactions of the kick drum vs. bass and snare patterns. The architecture is a convolutional variant of a Gated Autoencoder (GAE, see Section 3). This architecture can be thought of as a feed-forward neural network where the weights are modulated by learned mapping codes [3]. Each mapping code captures local relations between kick vs bass and snare inputs, such that an entire track is associated to a sequence of mapping codes. Since we want mapping codes to capture rhythmic patterns rather than just the instantaneous presence or absence of onsets in the tracks, during training we enforce invariance of mapping codes to (moderate) time shifts and tempo changes in the inputs. The resulting mapping codes remain largely constant throughout sections with a stable rhythm. This provides high-level control over the generated material in the sense that different kick drum patterns for some section can be realized simply by selecting a different mapping code (either by sampling or by inferring them from another section or song), and applying it throughout the section. To our knowledge this is a novel approach to music generation. It reconciles the notion of user control with the presence of conditioning material in a musically meaningful way: rather than controlling the characteristics of the generated material directly, it offers control over how the generated material relates to the conditioning material. Apart from quantitative experiments to show the basic validity of our approach, we validate our model by way of a set of sound examples and visualized outputs. We focus on three scenarios specifically. Firstly we demonstrate the ability to create a variety of plausible kick drum tracks for a given snare and bass track pair by sampling from a standard multivariate normal distribution in the mapping space. Secondly, we test the possibility of style-transfer, by applying rhythmic interaction patterns inferred from one song to induce similar patterns in other songs. Finally, we show that the semantics of the mapping space is invariant under changes in tempo. In continuation we present related work (Section 2), describe the proposed model architecture and data representations (Section 3), and validate the approach (Section 4). 
Section 5 provides concluding remarks and future work. METHOD A schematic overview of the proposed model architecture is shown in Figure 1. For time series modeling, we adapt the common dense GAE architecture to 1D convolution in time, yielding a Convolutional Gated Autoencoder (CGAE). We aim to model the rhythmic interactions between input signals $x \in \mathbb{R}^{M \times T}$ and a target signal $y \in \mathbb{R}^{1 \times T}$. More precisely, x represents M 1D signals of length T indicating onset functions of instrument tracks and beat- and downbeat information of a song, while y represents the onset function of a target instrument. Then the rhythmic interactions (henceforth referred to as mappings) between x and y are defined as $m = W * (U * x \cdot V * y)$, (1) where $m \in \mathbb{R}^{Q \times T}$, and $U \in \mathbb{R}^{K \times M \times R}$, $V \in \mathbb{R}^{K \times 1 \times R}$ represent respectively K convolution kernels for M input maps and kernel size R, and $W \in \mathbb{R}^{Q \times K \times 1}$ represents Q convolution kernels for K input maps and kernel size 1. Furthermore, $*$ is the convolution operator and $\cdot$ is the Hadamard product. For brevity, the notation above assumes a CGAE architecture with only one mapping layer and one layer for input and target. In practice we use several convolutional layers, as described in Section 3.1. Given the rhythmic interactions m and the rhythmic context x, the target onset function is reconstructed as $\tilde{y} = V^{\top} * (U * x \cdot W^{\top} * m)$, (2) where the transposed kernels $V^{\top}$ and $W^{\top}$ result in a deconvolution. The model parameters are trained by minimizing the mean squared error $L_{mse}(y, \tilde{y})$ between the target signal y and its reconstruction $\tilde{y}$. In order to draw samples from the model, we want to impose a Gaussian prior over m, resulting in $p(m) = \mathcal{N}(0, I)$. Additionally, m should apply to any input x, and should therefore not contain any information about the content of x. These conditions are imposed using adversarial training [16]: a discriminator $D(\cdot)$ estimates whether its input is drawn from a Gaussian distribution and contains no information about x. To that end, we concatenate $(U * x)$ with either actual mappings m or noise drawn from an independent Gaussian distribution $\eta \sim \mathcal{N}(0, I)$, $\eta \in \mathbb{R}^{Q \times T}$. This results in $D(m, (U * x))$ and $D(\eta, (U * x))$. In adversarial training, the discriminator $D(\cdot)$ learns to distinguish between the input containing m and the input containing $\eta$. If there is mutual information between $(U * x)$ and m, the discriminator can exploit this for its classification task. This causes the encoding pathways to remove any information about x from m. Also, we obtain $m \sim \mathcal{N}(0, I)$. Accordingly, the discriminator is trained to minimize the loss $L_{advers} = \frac{1}{T} \sum_t \left[ D(m, (U * x))_t - D(\eta, (U * x))_t \right]$, (3) with $D(\cdot)_t$ being the output of the discriminator at time t. To make the mappings more constant over time, an additional loss penalizes differences of successive mappings: $L_{const} = \frac{1}{T} \sum_t (m_t - m_{t+1})^2$, $m_t \in \mathbb{R}^{Q}$. A further loss that constrains each map $m_q \in \mathbb{R}^{T}$ to have zero mean and unit variance over time and instances in a batch considerably improves the learning of the CGAE: $L_{std} = \frac{1}{Q} \sum_q^{Q} \left( \left( \frac{1}{N} \sum_i^{N} (m_{q,i} - \mu_q)^2 - 1 \right)^2 + \mu_q^2 \right)$, (4) where $m_{q,i}$ are the observations of convolutional map $m_q$ over all time steps and instances in a batch, and $\mu_q$ is the mean of $m_{q,i}$. Optimization is performed in two steps per mini-batch. First, the discriminator is trained to minimize $L_{advers}$, then the CGAE is trained to minimize $L_{mse}(y, \tilde{y}) + L_{const} + L_{std} - L_{advers}$.
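A minimal, single-layer PyTorch sketch of Eqs. (1) and (2) is given below. It assumes one mapping layer only; the filter counts and kernel sizes (M, K, Q, R) are illustrative rather than the paper's actual configuration, and the adversarial, constancy and standardization losses are omitted. Weight sharing between the encoding and decoding passes is obtained by reusing the same kernel tensors with conv_transpose1d.

```python
import torch
import torch.nn.functional as F

class CGAE(torch.nn.Module):
    """Minimal single-layer sketch of the convolutional gated autoencoder
    of Eqs. (1) and (2). The real model stacks several layers; the sizes
    here are illustrative, not the paper's."""

    def __init__(self, M=4, K=64, Q=16, R=9):
        super().__init__()
        self.R = R
        self.U = torch.nn.Parameter(0.01 * torch.randn(K, M, R))  # input filters
        self.V = torch.nn.Parameter(0.01 * torch.randn(K, 1, R))  # target filters
        self.W = torch.nn.Parameter(0.01 * torch.randn(Q, K, 1))  # mapping filters

    def encode(self, x, y):
        # Eq. (1): m = W * (U * x  .  V * y)
        p = (self.R - 1) // 2
        fx = F.conv1d(x, self.U, padding=p)
        fy = F.conv1d(y, self.V, padding=p)
        return F.conv1d(fx * fy, self.W)

    def decode(self, x, m):
        # Eq. (2): transposed kernels act as deconvolutions; the same weight
        # tensors are reused via conv_transpose1d.
        p = (self.R - 1) // 2
        fx = F.conv1d(x, self.U, padding=p)
        fm = F.conv_transpose1d(m, self.W)
        return F.conv_transpose1d(fx * fm, self.V, padding=p)

# Toy usage: a batch of context signals (beat, downbeat, snare, bass onsets)
# and target kick-drum onset curves of length T = 512.
x = torch.randn(8, 4, 512)
y = torch.randn(8, 1, 512)
model = CGAE()
m = model.encode(x, y)
y_hat = model.decode(x, m)
loss_mse = F.mse_loss(y_hat, y)
```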
Architecture and training details [...] [17] between layers (also for the deconvolution passes). The model is trained for 2500 epochs with batch size 100, using 50% dropout on the inputs x. During training, a data augmentation based regularization method is used to make the mappings invariant to time shift and tempo change. To that end we define a transformation function $\psi_\theta(z)$ that shifts and scales a signal z in the time dimension, with random shifts between -150 and +150 time steps (±1.75s) and random scale factors between 0.8 and 1.2. Training is then performed as follows. First, the mappings m are inferred according to Eq. 1. Then, the input signals are modified using $\psi_\theta(\cdot)$, resulting in an altered Eq. 2: $\tilde{y}_{\psi_\theta} = V^{\top} * (U * \psi_\theta(x) \cdot W^{\top} * m)$. (5) Finally, the mean squared error between the reconstruction obtained in this way and the transformed target is minimized as $L_{mse}(\psi_\theta(y), \tilde{y}_{\psi_\theta})$. This approach was first proposed in [18]. Due to the gating mechanism $\cdot$ (activating only pathways with appropriate tempo and time-shift), a CGAE is particularly suited for learning such invariances. By imposing time-shift invariance, we assume that rhythmic interaction patterns (and the respective mappings) in the training data are locally constant. Even if this method introduces some error at positions where rhythmic patterns change, most of the time the assumption of locally constant rhythm is valid. Data representation The training/validation sets consist of 665/193 pop/rock/electro songs where the rhythm instruments bass, kick and snare are available as separate 44.1kHz audio tracks. The context signals x consist of two 1D input maps for beat and downbeat probabilities, and two 1D input maps for the onset functions of Snare and Bass. The target signal y consists of a 1D onset function of the Kick drum. The onset functions are extracted using the ComplexDomainOnsetDetection feature of the publicly available Yaafe library 1 with a block size of 1024, a step size of 512, and a Hann window function. For the downbeat functions we use the downbeat estimation RNN of the madmom library 2 . Input signals are individually standardized to zero mean and unit variance over each song. Rendering Audio We create an actual kick drum track from an onset strength curve y using salient peak picking; a code sketch of this procedure is given below. First, we remove less salient peaks from y by zero-phase filtering with a low-pass Butterworth filter of order two and a critical frequency of half the Nyquist frequency. The local maxima of the smoothed curve are then thresholded, discarding all maxima below a certain proportion (see below) of the maximum of the standardized onset strength curve. The remaining peaks are selected as onset positions. Finally, we render an audio file by placing a "one-shot" drum sample on all remaining peaks after thresholding. We introduce dynamics by choosing the volume of the sample from 70% for peaks at the threshold to 100% for peaks with the maximum value. For the qualitative experiments in the following section, we manually choose the threshold between 15% and 50%. For the quantitative results in Table 1 we fix the threshold at 25%, but values of 20% and 30% yield similar figures. EXPERIMENTS For the qualitative experiments we use four songs, Gipsy Love, Orgs Waltz, Miss You and Drehscheibe, produced by the first author. We encourage the reader to listen to the results on the accompanying web page 3 .
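The salient peak picking of Section 3.3 could be realized with scipy roughly as follows. The function and parameter names are ours, and the gain mapping is one plausible reading of the 70% to 100% dynamics described above rather than the authors' code; the default threshold of 25% matches the quantitative setting.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def render_kick_track(onset_curve, kick_sample, hop=512, thresh=0.25):
    """Turn an onset strength curve into an audio track by salient peak
    picking, roughly as described above. `kick_sample` is a mono one-shot
    drum sample; the hop of 512 matches the onset feature step size.
    A sketch of the described procedure, not the authors' implementation."""
    # Zero-phase smoothing: order-2 Butterworth low-pass at half Nyquist.
    b, a = butter(2, 0.5)
    smoothed = filtfilt(b, a, onset_curve)

    # Keep local maxima above `thresh` of the smoothed curve's maximum.
    height = thresh * smoothed.max()
    peaks, props = find_peaks(smoothed, height=height)

    # Place the one-shot sample at each peak, with volume scaled from 70%
    # (at threshold) to 100% (at the global maximum).
    out = np.zeros(len(onset_curve) * hop + len(kick_sample))
    for p, h in zip(peaks, props["peak_heights"]):
        gain = 0.7 + 0.3 * (h - height) / max(smoothed.max() - height, 1e-9)
        start = p * hop
        out[start:start + len(kick_sample)] += gain * kick_sample
    return out
```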
Three scenarios are chosen to show the effectiveness of the proposed approach: Conditional Generation of Drum Patterns To generate a kick drum track, we sample only one mapping code $m_t$ (from a 16-dimensional standard Gaussian), repeat it across the time dimension, and reconstruct y given the resulting m, as well as x. Subsequently, we render 20 audio files as described in Section 3.3 and pick those 10 which together constitute the most varied set. Figure 2 shows some results of the generation task: randomly generated kick drum tracks conditioned on the song Drehscheibe (the sound examples are available online). It is clear from these screenshots that the model generates a wide variety of different rhythmic patterns which adapt to the local context, even though the sampled mapping code is constant (repeated) over time. Style Transfer First, for a given song, we infer m from x and y. Second, k-means clustering is performed over all $m_t$, using the Davies-Bouldin score [19] for determining the optimal number of clusters (typically yielding an optimal k between 5 and 8). Then we use the cluster center of the largest cluster found as the mapping code, again repeat it over time and use it for another song onto which the style should be transferred. Again, the results are available on the accompanying web page (see above). Tempo-invariance We use the WSOLA time stretching algorithm [20], as implemented in sox, to create four time-stretched versions of each song, at 80%, 90%, 110% and 120% of the original tempo, respectively. Then, for a given song in its original tempo, we determine a prototypical mapping code with the k-means clustering method described above. We repeat that code throughout the time-stretched versions of the song and reconstruct y given m and x. Figure 3 shows generated kick drum tracks in the five different tempos (four time-stretched versions plus the original tempo). It is clear from these screenshots that the drum pattern adjusts to the different tempos and does not change its style. Although it is not obvious how to evaluate the output of the model other than by listening, we can check the validity of basic assumptions about the behavior of the model. One assumption is that the ground truth mappings for a song, as defined in Eq. (1), allow us to reconstruct the drum track relatively faithfully (Eq. (2)). To test the degree to which reconstruction may be sacrificed to satisfy other constraints (e.g., the adversarial loss), we compute the accuracy of the reconstruction. Given onset strength curves y and $\tilde{y}$, we determine the onset positions as described in Section 3.3, and compute the precision, recall, and F-score using a 50 ms tolerance window, following MIREX onset detection evaluation criteria [21]. (Table 1 caption: Average precision, recall, and F-score for onset reconstruction using ground truth and style transfer mappings.) The scores in Table 1 show that the ground truth mappings are informative enough to largely reconstruct the target onsets correctly. The reconstructions are not perfect, likely due to the model's invariance and the adversarial loss on the mappings. Note also that the accuracy for the validation set is similar to that for the training set, implying that no overfitting has occurred. The dominance of precision over recall is likely due to the typical "conservative" behavior of GAEs [22]. Furthermore, we test the validity of the heuristic of taking the largest cluster centroid as a constant mapping vector over time for style transfer (assuming time-invariance). To do so, we apply this heuristic to transfer the style of a song to itself.
That is, to reconstruct the kick drum track we use the largest mode of the song in the mapping space as a constant through time, rather than the ground truth mapping, which is a trajectory through the mapping space. Unsurprisingly, this approximation affects the reconstruction of the original kick drum track negatively, but the F-scores of over 0.7 still show that a substantial part of the tracks is reconstructed correctly. CONCLUSIONS AND FUTURE WORK We have presented a model for the conditional generation of kick drum tracks given snare and bass tracks in pop/rock/electro music. The model was trained on a dataset of multi-track recordings, using a custom objective function to capture the relationship between onset patterns in the tracks of the same song in mapping codes. We have shown that the mapping codes are largely tempo- and time-shift-invariant and that musically plausible kick drum tracks can be generated given a snare and bass track, either by sampling a mapping code or through style transfer, by inferring the mapping code from another song. Importantly, two basic aspects of the chosen approach have been shown to be valid. Firstly, the ground-truth mapping codes are able to faithfully reconstruct the original kick drum track. Secondly, the style transfer heuristic of applying a constant mapping code through time was shown to be largely valid, by comparing the original kick drum track of a song to the result of applying the style of a song to itself. Although the current work is limited in the sense that the model has only been demonstrated for kick drum track generation, we believe this approach is applicable to other content. We are currently applying the same approach to snare drum generation and f0 generation for bass tracks. ACKNOWLEDGEMENTS We thank Cyran Aouameur for his valuable support, as well as Adonis Storr, Tegan Koster, Stefan Weißenberger and Clemens Riedl for their contribution in producing the example tracks.
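As a companion to the style transfer experiment in Section 4, the snippet below sketches the prototype-mapping heuristic: cluster the per-time-step mapping codes with k-means, select the number of clusters by the Davies-Bouldin score, and return the centroid of the largest cluster. Function and variable names are ours, and details such as the k range are assumptions rather than the authors' exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def prototype_mapping(mappings, k_range=range(2, 9)):
    """Pick a single prototypical mapping code for a song: cluster the
    per-frame codes (shape (T, Q)) with k-means, choose k by the
    Davies-Bouldin score (lower is better), and return the centroid of
    the largest cluster. A sketch under the stated assumptions."""
    best = None
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(mappings)
        score = davies_bouldin_score(mappings, km.labels_)
        if best is None or score < best[0]:
            best = (score, km)
    km = best[1]
    largest = np.bincount(km.labels_).argmax()
    return km.cluster_centers_[largest]

# The returned (Q,)-vector would then be repeated over time and fed to the
# decoder together with the target song's context signals x.
```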
2,716
1908.00355
2966623897
In order to mimic the human ability of continual acquisition and transfer of knowledge across various tasks, a learning system needs the capability for continual learning, effectively utilizing the previously acquired skills. As such, the key challenge is to transfer and generalize the knowledge learned from one task to other tasks, avoiding forgetting and interference of previous knowledge and improving the overall performance. In this paper, within the continual learning paradigm, we introduce a method that effectively forgets the less useful data samples continuously and allows beneficial information to be kept for training of the subsequent tasks, in an online manner. The method uses statistical leverage score information to measure the importance of the data samples in every task and adopts frequent directions approach to enable a continual or life-long learning property. This effectively maintains a constant training size across all tasks. We first provide mathematical intuition for the method and then demonstrate its effectiveness in avoiding catastrophic forgetting and computational efficiency on continual learning of classification tasks when compared with the existing state-of-the-art techniques.
Recently, a number of approaches have been proposed to adapt a DNN model to the continual learning setting: from an adaptive model architecture perspective, such as adding columns or neurons for new tasks @cite_4 @cite_6 @cite_9 ; model parameter adjustment or regularization techniques, such as imposing restrictions on parameter updates @cite_23 @cite_1 @cite_32 @cite_33 ; memory revisit techniques, which ensure model updates towards the optimal directions @cite_20 @cite_42 @cite_14 ; Bayesian approaches to model continuously acquired information @cite_28 @cite_31 @cite_24 ; or, on broader domains, approaches targeted at different setups or goals such as few-shot learning or transfer learning @cite_16 @cite_26 .
{ "abstract": [ "", "Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index.", "", "", "We introduce a framework for continual learning based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for continual learning, avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function. To achieve this we rely on a Gaussian process obtained by treating the weights of the last layer of a neural network as random and Gaussian distributed. Then, the training algorithm sequentially encounters tasks and constructs posterior beliefs over the task-specific functions by using inducing point sparse Gaussian process methods. At each step a new task is first learnt and then a summary is constructed consisting of (i) inducing inputs and (ii) a posterior distribution over the function values at these inputs. This summary then regularises learning of future tasks, through Kullback-Leibler regularisation terms, so that catastrophic forgetting of earlier tasks is avoided. We demonstrate our algorithm in classification datasets, such as Split-MNIST, Permuted-MNIST and Omniglot.", "", "", "", "", "", "", "Abstract The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.", "", "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. 
In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.", "One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art." ], "cite_N": [ "@cite_14", "@cite_4", "@cite_33", "@cite_26", "@cite_28", "@cite_9", "@cite_42", "@cite_1", "@cite_32", "@cite_6", "@cite_24", "@cite_23", "@cite_31", "@cite_16", "@cite_20" ], "mid": [ "", "2426267443", "", "", "2951265532", "", "", "", "", "", "", "2560647685", "", "2604763608", "2962724315" ] }
Continual Learning via Online Leverage Score Sampling
It is a typical practice to design and optimize machine learning (ML) models to solve a single task. On the other hand, humans, instead of learning over isolated complex tasks, are capable of generalizing and transferring knowledge and skills learned from one task to another. This ability to remember, learn and transfer information across tasks is referred to as continual learning [36,31,12,27]. The major challenge for creating ML models with continual learning ability is that they are prone to catastrophic forgetting [22,23,11,7]. ML models tend to forget the knowledge learned from previous tasks when re-trained on new observations corresponding to a different (but related) task. Specifically when a deep neural network (DNN) is fed with a sequence of tasks, the ability to solve the first task will decline significantly after training on the following tasks. The typical structure of DNNs by design does not possess the capability of preserving previously learned knowledge without interference between tasks or catastrophic forgetting. In order to overcome catastrophic forgetting, a learning system is required to continuously acquire knowledge from the newly fed data as well as to prevent the training of the new data samples from destroying the existing knowledge. In this paper, we propose a novel approach to continual learning with DNNs that addresses the catastrophic forgetting issue, namely a technique called online leverage score sampling (OLSS). In OLSS, we progressively compress the input information learned thus far, along with the input from current task and form more efficiently condensed data samples. The compression technique is based on the statistical leverage scores measure, and it uses the concept of frequent directions in order to connect the series of compression steps for a sequence of tasks. When thinking about continual learning, a major source of inspiration is the ability of biological brains to learn without destructive interference between older memories and generalize knowledge across multiple tasks. In this regard, the typical approach is enabling some form of episodic-memory in the network and consolidation [22] via replay of older training data. However, this is an expensive process and does not scale well for learning large number of tasks. As an alternative, taking inspiration from the neuro-computational models of complex synapses [1], recent work has focused on assigning some form of importance to parameters in a DNN and perform task-specific synaptic consolidation [14,39]. Here, we take a very different view of continual learning and find inspiration in the brains ability for dimensionality reduction [26] to extract meaningful information from its environment and drive behavior. As such, we enable such progressive dimensionality reduction (in terms of number of samples) of previous task data combined with new task data in order to only preserve a good summary information (discarding the less relevant information or effective forgetting) before further learning. Repeating this process in an online manner we enable continual learning for a large sequence of tasks. Much like our brains, a central strategy employed by our method is to strike a balance between dimensionality reduction of task specific data and dimensionality expansion as processing progresses throughout the hierarchy of the neural network [8]. 
Online Leverage Score Sampling Before presenting the idea, we first set up the problem: let $\{(A_1, B_1), (A_2, B_2), ..., (A_i, B_i), ...\}$ represent a sequence of tasks, where each task consists of $n_i$ data samples and each sample has a feature dimension d and an output dimension m, i.e., input $A_i \in \mathbb{R}^{n_i \times d}$ and true output $B_i \in \mathbb{R}^{n_i \times m}$. Here, we assume the feature and output dimensions are fixed for all tasks 1 . The goal is to train a DNN over the sequence of tasks and ensure it performs well on all of them, without catastrophic forgetting. Here, we consider that the network's architecture stays the same and the tasks are received in a sequential manner. Formally, with f representing a DNN, our objective is to minimize the loss 2 : $\min_f \|f(A) - B\|_2^2$, (1) where $A = [A_1^{\top}\ A_2^{\top}\ \cdots\ A_i^{\top}\ \cdots]^{\top}$ and $B = [B_1^{\top}\ B_2^{\top}\ \cdots\ B_i^{\top}\ \cdots]^{\top}$. Under this setup, we look at some of the existing models. Online EWC trains f on the ith task $(A_i, B_i)$ with a loss function containing additional penalty terms: $\min_f \|f(A_i) - B_i\|_2^2 + \sum_{j=1}^{i-1} \sum_{p=1}^{w} \lambda F_p^j (\theta_p - \theta_p^{j*})^2$, where $\lambda$ indicates the importance level of the previous tasks compared to task i, $F_p^j$ represents the pth diagonal entry of the Fisher information matrix for task j, w represents the number of parameters in the network, $\theta_p$ corresponds to the pth model parameter for the current task and $\theta_p^{j*}$ is the pth model parameter value for the jth task. Alternately, GEM maintains an extra memory buffer $M_k$ containing data samples from each of the previous tasks $k < i$. It trains on the current task $(A_i, B_i)$ with a regular loss function, but subject to inequalities on each update of f (update on each parameter $\theta$): $\min_f \|f(A_i) - B_i\|_2^2$ s.t. $\left\langle \frac{\partial \|f_\theta(A_i) - B_i\|_2^2}{\partial \theta}, \frac{\partial \|f_\theta(A_{M_k}) - B_{M_k}\|_2^2}{\partial \theta} \right\rangle \ge 0$ for all $k < i$. Our approach The new method OLSS, different from either method above, aims to find an approximation of A in a streaming (online) manner, i.e., to form a sketch $\hat{A}_i \in \mathbb{R}^{\ell \times d}$ that approximates $[A_1^{\top}\ A_2^{\top}\ \cdots\ A_i^{\top}]^{\top} \in \mathbb{R}^{(n_1 + ... + n_i) \times d}$ such that the resulting $\hat{f}_i := \arg\min_f \|f(\hat{A}_i) - \hat{B}_i\|_2^2$ is likely to perform on all tasks as well as $f_i^* := \arg\min_f \|f([A_1^{\top}\ A_2^{\top}\ \cdots\ A_i^{\top}]^{\top}) - [B_1^{\top}\ B_2^{\top}\ \cdots\ B_i^{\top}]^{\top}\|_2^2$. (2) In order to avoid extra memory and computation cost during the training process, we could set the approximation $\hat{A}_i$ to have the same number of rows (number of data samples) as the current task $A_i$. Equations (1) and (2) represent nonlinear least squares problems. It is to be noted that a nonlinear least squares problem can be solved with an approximation deduced from an iteration of linear least squares problems with $J^{\top} J \Delta\theta = J^{\top} \Delta B$, where J is the Jacobian of f at each update (using the Gauss-Newton method). Besides this technique, there are various other approaches to addressing this problem. Here we adopt a cost-effective and simple randomization technique, leverage score sampling, which has been used extensively in solving large scale linear least squares and low rank approximation problems [2,5,37]. Statistical Leverage Score and Leverage Score Sampling Statistical leverage scores measure the non-uniformity structure of a matrix, and a higher score indicates a heavier weight of the row contributing to the non-uniformity of the matrix. They have been widely used for outlier detection in statistical data analysis. In recent applications [5,37], they also emerge as a fundamental tool for constructing randomized matrix sketches.
Given a matrix $A \in \mathbb{R}^{n \times d}$, a sketch of A is another matrix $B \in \mathbb{R}^{\ell \times d}$, where $\ell$ is significantly smaller than n, that still approximates A well. More specifically, Definition 1 [5]: given a matrix $A \in \mathbb{R}^{n \times d}$ with $n > d$, a matrix $B \in \mathbb{R}^{\ell \times d}$ is a sketch of A with error $\varepsilon$ if $\|A^{\top} A - B^{\top} B\|_2 \le \varepsilon \|A\|_2^2$. Let U denote the $n \times d$ matrix of left singular vectors of A; the statistical leverage score of the ith row of A is then $\|U_{(i,:)}\|_2^2$. Theoretical accuracy guarantees have been derived for random sampling methods based on statistical leverage scores [37,20]. Considering our setup, which is to approximate a matrix for solving a least squares problem, and also the computational efficiency, we adopt the following leverage score based sampling method: given a sketch size $\ell$, define a distribution $\{p_1, ..., p_n\}$ 3 with $p_i = \|U_{(i,:)}\|_2^2 / d$; the sketch is formed by independently and randomly selecting $\ell$ rows of A without replacement, where the ith row is selected with probability $p_i$. (Footnote 3: since $d = \|U\|_F^2 = \sum_{i=1}^{n} \|U_{(i,:)}\|_2^2$, the $p_i$ sum to one and form a valid probability distribution.) Based on this, we are able to select the samples that contribute the most to a given dataset. The remaining problem is to embed it in a sequence of tasks and still generate promising approximations to solve the least squares problem. In order to achieve that, we make use of the concept of frequent directions. Frequent Directions Frequent directions extends the idea of frequent items in the item frequency approximation problem to a matrix [18,10,34], and it is also used to generate a sketch for a matrix, but in a data streaming environment. As the rows of $A \in \mathbb{R}^{n \times d}$ are fed in one by one, the original idea of frequent directions is to first perform Singular Value Decomposition (SVD) on the first $2\ell$ rows of A and shrink the top singular values by the same amount, which is determined by the $(\ell+1)$th singular value, and then save the product of the shrunken top $\ell$ singular values and the top $\ell$ right singular vectors as a sketch for the first $2\ell$ rows of A. With the next $\ell$ rows fed in, append them behind the sketch and perform the shrink and product again. This process is repeated until reaching the final sketch $\hat{A} \in \mathbb{R}^{\ell \times d}$ for $A \in \mathbb{R}^{n \times d}$. Different from the leverage score sampling sketching technique, a deterministic bound is guaranteed for the accuracy of the sketch: $\|A^{\top} A - \hat{A}^{\top} \hat{A}\|_2 \le \|A - A_k\|_F^2 / (\ell - k)$ with $\ell > k$, where $A_k$ denotes the best rank-k approximation of A [18,10]. Inspired by the routine of frequent directions in a streaming data environment, our OLSS method is constructed as follows. First initialize a 'sketch' matrix $\hat{A} \in \mathbb{R}^{\ell \times d}$ and a corresponding $\hat{B} \in \mathbb{R}^{\ell \times m}$. For the first task ($A_1 \in \mathbb{R}^{n_1 \times d}$, $B_1 \in \mathbb{R}^{n_1 \times m}$), we randomly select $\ell$ rows of $A_1$ and (the corresponding rows of) $B_1$ without replacement according to the leverage score sampling defined above, with the probability distribution based on $A_1$'s leverage scores, and then train the model on the sketch $(\hat{A}, \hat{B})$; after seeing Task 2, we append $(A_2, B_2)$ to the sketch $(\hat{A}, \hat{B})$ respectively and again randomly select $\ell$ out of $\ell + n_2$ data samples according to the leverage score sampling, with the probability distribution based on the leverage scores of $[\hat{A}^{\top}, A_2^{\top}]^{\top} \in \mathbb{R}^{(\ell + n_2) \times d}$, form a new sketch $\hat{A} \in \mathbb{R}^{\ell \times d}$ and $\hat{B} \in \mathbb{R}^{\ell \times m}$, and then train on it. This process is repeated until the end of the task sequence. We present the step by step procedure in Algorithm 1. Main Algorithm The original ideas of leverage score sampling and frequent directions both have theoretical accuracy bounds for the sketch on the error term $A^{\top} A - \hat{A}^{\top} \hat{A}$.
The bounds show that the sketch $\hat{A}$ contains the relevant information used to form the covariance matrix $A^{\top} A$ of all the data samples; in other words, the sketch captures the relationship among the data samples in the feature space (which is of dimension d). For a sequence of tasks, it is common to have noisy data samples or interruptions among samples for different tasks. The continuous update of important rows in a matrix (data samples for a sequence of tasks), or the continuous effective forgetting of less useful rows, may serve as a filter to remove the unwanted noise. Different from most existing methods, Algorithm 1 does not work directly with the training model; instead it can be considered as data pre-processing which constantly extracts useful information from previous and current tasks. Because of its parallel construction, OLSS could be combined with all the aforementioned algorithms to further improve its performance. The sampling and training steps (Steps 8 and 9) of Algorithm 1 read: 8: Randomly select $\ell$ rows of $\hat{A}$ and $\hat{B}$ without replacement, based on the probabilities $\|U_{(j,:)}\|_2^2 / \|U\|_F^2$ for $j \in \{1, ..., n_i + \ell\}$ (or $j \in \{1, ..., n_i\}$ when $i = 1$), and set them as $\hat{A}$ and $\hat{B}$ respectively. 9: Train the model with $\hat{A} \in \mathbb{R}^{\ell \times d}$ and $\hat{B} \in \mathbb{R}^{\ell \times m}$. Regarding the computational complexity, when $n_i$ is large, the SVD of $\hat{A} \in \mathbb{R}^{(n_i + \ell) \times d}$ in Step 6 is computationally expensive, taking $O((n_i + \ell) d^2)$ time. This procedure is for the computation of leverage scores, which can be sped up significantly with various leverage score approximation techniques in the literature [5,2,29]; for example, with the randomized algorithm in [5] the leverage scores for $\hat{A}$ can be approximated in $O((n_i + \ell) d \log(n_i + \ell))$ time. However, one possible drawback of the above procedure is that the relationship represented in a covariance matrix is linear, so any underlying nonlinear connections among the data samples may not be fully captured in the sketch. Furthermore, the structure of the function f would also affect the information required to be kept in the sketch in order to perform well on solving the least squares problem in (2). As such, there may exist a certain underlying dependency of a data sample's importance on the DNN model architecture. This remains a future research direction. Experiments We evaluate the performance of the proposed algorithm OLSS on three classification tasks used as benchmarks in related prior work. • Rotated MNIST [19]: a variant of the MNIST dataset of handwritten digits [16], where the digits in each task are rotated by a fixed angle between 0° and 180°. The experiment is on 20 tasks and each task consists of 60,000 training and 10,000 testing samples. • Permuted MNIST [14]: a variant of the MNIST dataset [16], where the digits in each task are transformed by a fixed permutation of pixels. The experiment is on 20 tasks and each task consists of 60,000 training and 10,000 testing samples. • Incremental CIFAR100 [28,39]: a variant of the CIFAR object recognition dataset with 100 classes [15]. The experiment is on 20 tasks and each task consists of 5 classes; each task consists of 2,500 training and 500 testing samples. Each task introduces a new set of classes: for a total of 20 tasks, each new task concerns examples from a disjoint subset of 5 classes. In the setting of [19] for incremental CIFAR100, a softmax layer is added to the output vector which only allows entries representing the 5 classes in the current task to output values larger than 0. In our setting, we allow the entries representing all the past occurring classes to output values larger than 0.
We believe this is a more natural setup for continual learning. For the aforementioned experiments, we compare the performance of the following algorithms: • A simple SGD predictor. • EWC [14], as discussed earlier in Section 2. • GEM [19], as discussed earlier in Section 2. • iCaRL [28], which classifies based on a nearest-mean-of-exemplars rule, keeps an episodic memory and updates its exemplar set continuously to prevent catastrophic forgetting. It is only applicable to the incremental CIFAR100 experiment due to its requirement of the same input representation across tasks. • OLSS (ours). In addition to these, experiments were also conducted using SI [39] on the same three tasks. However, no significant improvement in performance was observed, along with a sensitivity to the choice of learning rate, although its learning ability was relatively better than online EWC. As such we do not show SI performance in our plots. It can however be tested using our open sourced code 4 for this paper. The competing algorithms SGD, EWC, GEM and iCaRL were implemented based on the publicly available code from the original authors of the GEM paper [19]; a plain SGD optimizer is used for all algorithms. The DNN used for rotated and permuted MNIST is an MLP with 2 hidden layers, each with 400 rectified linear units, whereas a smaller version of ResNet18 [13], with three times fewer feature maps across all layers, is used for the incremental CIFAR100 experiment. We train 5 epochs with batch size 200 on the rotated and permuted MNIST datasets and 10 epochs with batch size 100 on incremental CIFAR100. The regularization and memory hyper-parameters in EWC, iCaRL and GEM were set as described in [19]. The space parameter for our OLSS algorithm was set to be equal to the number of samples in each task. The learning rate for each algorithm was determined through a grid search. Results To evaluate the performance of different algorithms, we examine: • The average test accuracy (Figure 1, left), defined as $\frac{1}{k} \sum_{i=1}^{k} \mathrm{Acc}(\text{task } i)$ after training $x = k$ tasks. • Task 1's test accuracy (Figure 1, right), defined as $\mathrm{Acc}(\text{task } 1)$ after training $x = k$ tasks. • Wall clock time (Table 1). As observed from Figure 1 (left), across the three benchmarks OLSS achieves similar or slightly higher average task accuracy compared to GEM and clearly outperforms SGD, EWC and iCaRL. This demonstrates the ability of OLSS to continuously select useful data samples during progressive learning to overcome the catastrophic forgetting issue. In terms of maintaining the performance of the earliest task (Task 1) after training a sequence of tasks, OLSS shows the most robust performance, on par with GEM, on rotated and permuted MNIST, and slightly worse than GEM as the number of tasks increases in the case of incremental CIFAR100. However, both of these methods significantly outperform SGD, EWC and iCaRL. In order to compare the computational time complexity across the methods, we report the wall clock time in Table 1. Noticeably, SGD is the fastest among all the algorithms but performs the worst, as observed in Figure 1; it is followed by OLSS and EWC (only in the case of CIFAR100 is EWC relatively faster than OLSS). The algorithms iCaRL and GEM both demand much higher computational costs, with GEM being significantly slower than the rest.
This behavior is expected due to the requirement of additional constraint validation and, on certain occasions, a gradient projection step (in order to correct for constraint violations across data samples from previously learned tasks stored in the memory buffer) in GEM (see Section 3 in [19]). As such, although the buffered replay-memory based approach in GEM prevents catastrophic forgetting, the computation becomes prohibitively slow to be performed online while training DNNs in sequential multi-task learning scenarios. Based on the performance and computational efficiency on all three datasets, OLSS emerges as the most favorable among the current state-of-the-art algorithms for continual learning. Discussions The space parameter of OLSS ($\ell$ in Algorithm 1) could be varied to balance its accuracy and efficiency. Here the choice of $\ell = n_i$ (the number of samples in the current task) is selected such that the number of training samples is standardized across all algorithms, enabling effective compression and extraction of data samples for OLSS in a straightforward comparison. However, it is to be noted that if $\ell = n_i$, OLSS indeed requires some additional memory in order to compute the SVD of the concatenated sketch of previous tasks and the current task. Unless the algorithm is run in an edge computing environment with limited memory on chip, this issue can be ignored. On the other hand, GEM and iCaRL keep an extra episodic memory throughout the training process. The memory size was set to 256 for GEM and 1280 for iCaRL by considering the accuracy and efficiency in the experiments. Variations in the size of the episodic memory would also affect their performance as well as the running time. As described earlier, GEM requires a constraint validation step and a potential gradient projection step for every update of the model parameters. As such, the computational time complexity in this case is proportional to the product of the number of samples kept in the episodic memory, the number of parameters in the model and the number of iterations required to converge. In contrast, OLSS uses an SVD to compute the leverage scores for each task, which can be achieved in a time complexity proportional to the product of the square of the number of features and the number of data samples. This is considerably less compared to GEM, as shown in Table 1. The computational complexity can be further reduced with fast leverage score approximation methods like the randomized algorithm in [5]. As shown in Figure 2, after training the whole sequence of tasks, both GEM and OLSS are able to preserve the accuracy for most tasks on rotated and permuted MNIST. Nevertheless, it is difficult to completely recover the accuracy of previously trained tasks on CIFAR100 for all algorithms. In the case of a synaptic consolidation based method like EWC, the loss function contains additional regularization or penalty terms for each previously trained task. These additional penalties are isolated from each other. As the number of tasks increases, the method may lose the elasticity in consolidating the overlapping parameters, and as such shows a steeper slope in the EWC plot of Figure 2. Conclusions We presented a new approach to addressing the continual learning problem with deep neural networks. It is inspired by the randomization and compression techniques typically used in statistical analysis.
We combined a simple importance sampling technique -leverage score sampling with the frequent directions concept and developed an online effective forgetting or compression mechanism that preserves meaningful information from previous and current task, enabling continual learning across a sequence of tasks. Despite its simple structure, the results on classification benchmark experiments (designed for the catastrophic forgetting issue) demonstrate its effectiveness as compared to recent state of the art.
3,697
1908.00355
2966623897
In order to mimic the human ability of continual acquisition and transfer of knowledge across various tasks, a learning system needs the capability for continual learning, effectively utilizing the previously acquired skills. As such, the key challenge is to transfer and generalize the knowledge learned from one task to other tasks, avoiding forgetting and interference of previous knowledge and improving the overall performance. In this paper, within the continual learning paradigm, we introduce a method that effectively forgets the less useful data samples continuously and allows beneficial information to be kept for training of the subsequent tasks, in an online manner. The method uses statistical leverage score information to measure the importance of the data samples in every task and adopts frequent directions approach to enable a continual or life-long learning property. This effectively maintains a constant training size across all tasks. We first provide mathematical intuition for the method and then demonstrate its effectiveness in avoiding catastrophic forgetting and computational efficiency on continual learning of classification tasks when compared with the existing state-of-the-art techniques.
In order to demonstrate our idea in comparison with the state-of-the-art techniques, we briefly discuss the following three popular approaches to continual learning: I) : It constrains or regularizes the model parameters by adding additional terms in the loss function that prevent the model from deviating significantly from the parameters important to earlier tasks. Typical algorithms include elastic weight consolidation (EWC) @cite_23 and continual learning through synaptic intelligence (SI) @cite_2 . II) : It revises the model structure successively after each task in order to provide more memory and additional free parameters in the model for new task input. Recent examples in this direction are progressive neural networks @cite_4 and dynamically expanding networks @cite_10 . III) : It stores data samples from previous tasks in a separate memory buffer and retrains the new model based on both the new task input and the memory buffer. Popular algorithms here are gradient episodic memory (GEM) @cite_20 , incremental classifier and representation learning (iCaRL) @cite_5 .
{ "abstract": [ "Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index.", "Abstract The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.", "While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.", "A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. 
We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.", "We propose a novel deep network architecture for lifelong learning which we refer to as Dynamically Expandable Network (DEN), that can dynamically decide its network capacity as it trains on a sequence of tasks, to learn a compact overlapping knowledge sharing structure among tasks. DEN is efficiently trained in an online manner by performing selective retraining, dynamically expands network capacity upon arrival of each task with only the necessary number of units, and effectively prevents semantic drift by splitting duplicating units and timestamping them. We validate DEN on multiple public datasets in lifelong learning scenarios on multiple public datasets, on which it not only significantly outperforms existing lifelong learning methods for deep networks, but also achieves the same level of performance as the batch model with substantially fewer number of parameters.", "One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art." ], "cite_N": [ "@cite_4", "@cite_23", "@cite_2", "@cite_5", "@cite_10", "@cite_20" ], "mid": [ "2426267443", "2560647685", "2737492962", "2964189064", "2963540014", "2962724315" ] }
Continual Learning via Online Leverage Score Sampling
It is a typical practice to design and optimize machine learning (ML) models to solve a single task. On the other hand, humans, instead of learning over isolated complex tasks, are capable of generalizing and transferring knowledge and skills learned from one task to another. This ability to remember, learn and transfer information across tasks is referred to as continual learning [36,31,12,27]. The major challenge for creating ML models with continual learning ability is that they are prone to catastrophic forgetting [22,23,11,7]. ML models tend to forget the knowledge learned from previous tasks when re-trained on new observations corresponding to a different (but related) task. Specifically when a deep neural network (DNN) is fed with a sequence of tasks, the ability to solve the first task will decline significantly after training on the following tasks. The typical structure of DNNs by design does not possess the capability of preserving previously learned knowledge without interference between tasks or catastrophic forgetting. In order to overcome catastrophic forgetting, a learning system is required to continuously acquire knowledge from the newly fed data as well as to prevent the training of the new data samples from destroying the existing knowledge. In this paper, we propose a novel approach to continual learning with DNNs that addresses the catastrophic forgetting issue, namely a technique called online leverage score sampling (OLSS). In OLSS, we progressively compress the input information learned thus far, along with the input from current task and form more efficiently condensed data samples. The compression technique is based on the statistical leverage scores measure, and it uses the concept of frequent directions in order to connect the series of compression steps for a sequence of tasks. When thinking about continual learning, a major source of inspiration is the ability of biological brains to learn without destructive interference between older memories and generalize knowledge across multiple tasks. In this regard, the typical approach is enabling some form of episodic-memory in the network and consolidation [22] via replay of older training data. However, this is an expensive process and does not scale well for learning large number of tasks. As an alternative, taking inspiration from the neuro-computational models of complex synapses [1], recent work has focused on assigning some form of importance to parameters in a DNN and perform task-specific synaptic consolidation [14,39]. Here, we take a very different view of continual learning and find inspiration in the brains ability for dimensionality reduction [26] to extract meaningful information from its environment and drive behavior. As such, we enable such progressive dimensionality reduction (in terms of number of samples) of previous task data combined with new task data in order to only preserve a good summary information (discarding the less relevant information or effective forgetting) before further learning. Repeating this process in an online manner we enable continual learning for a large sequence of tasks. Much like our brains, a central strategy employed by our method is to strike a balance between dimensionality reduction of task specific data and dimensionality expansion as processing progresses throughout the hierarchy of the neural network [8]. 
Online Leverage Score Sampling

Before presenting the idea, we first set up the problem. Let {(A_1, B_1), (A_2, B_2), ..., (A_i, B_i), ...} represent a sequence of tasks; each task consists of n_i data samples, and each sample has a feature dimension d and an output dimension m, i.e., input A_i ∈ R^{n_i×d} and true output B_i ∈ R^{n_i×m}. Here, we assume the feature and output dimensions are fixed for all tasks. The goal is to train a DNN over the sequence of tasks and ensure it performs well on all of them, without catastrophic forgetting. We consider that the network's architecture stays the same and that the tasks are received in a sequential manner. Formally, with f representing a DNN, our objective is to minimize the loss

$$\min_f \; \| f(A) - B \|_2^2, \quad \text{where } A = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_i \\ \vdots \end{bmatrix} \text{ and } B = \begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_i \\ \vdots \end{bmatrix}. \tag{1}$$

Under this setup, we look at some of the existing models. Online EWC trains f on the i-th task (A_i, B_i) with a loss function containing additional penalty terms,

$$\min_f \; \| f(A_i) - B_i \|_2^2 + \sum_{j=1}^{i-1} \sum_{p=1}^{w} \lambda F_p^j \big(\theta_p - \theta_p^{j*}\big)^2,$$

where λ indicates the importance level of the previous tasks compared to task i, F_p^j represents the p-th diagonal entry of the Fisher information matrix for task j, w represents the number of parameters in the network, θ_p corresponds to the p-th model parameter for the current task, and θ_p^{j*} is the p-th model parameter value for the j-th task. Alternately, GEM maintains an extra memory buffer containing data samples from each of the previous tasks, M_k with k < i. It trains on the current task (A_i, B_i) with a regular loss function, but subject to inequalities on each update of f (an update on each parameter θ):

$$\min_f \; \| f(A_i) - B_i \|_2^2 \quad \text{s.t.} \quad \left\langle \frac{\partial \| f_\theta(A_i) - B_i \|_2^2}{\partial \theta}, \; \frac{\partial \| f_\theta(A_{M_k}) - B_{M_k} \|_2^2}{\partial \theta} \right\rangle \ge 0 \quad \text{for all } k < i.$$
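To make the two baseline objectives above concrete, the following minimal PyTorch sketch (an illustration under our own naming conventions, not the implementation used in the experiments) computes the EWC quadratic penalty and checks the GEM inequality constraints for a proposed gradient step; all arguments are assumed to be lists of tensors prepared by the caller, and the gradients are assumed to be flattened into 1-D tensors.

```python
import torch

def ewc_penalty(params, params_star, fishers, lam):
    """Sum of lam * F_p^j * (theta_p - theta_p^{j*})^2 over previous tasks j
    and parameters p. `params` is a list of current parameter tensors;
    `params_star[j]` and `fishers[j]` hold the parameter snapshot and the
    diagonal Fisher estimate for task j, with the same shapes as `params`."""
    penalty = torch.zeros(())
    for theta_star, fisher in zip(params_star, fishers):
        for p, p_star, f in zip(params, theta_star, fisher):
            penalty = penalty + (lam * f * (p - p_star) ** 2).sum()
    return penalty

def gem_violated(grad_current, grads_memory):
    """Indices k with <g, g_k> < 0, i.e. a plain gradient step would increase
    the loss on episodic memory M_k and GEM would have to project the gradient.
    Both arguments are flattened 1-D gradient tensors / lists thereof."""
    return [k for k, g_k in enumerate(grads_memory)
            if torch.dot(grad_current, g_k) < 0.0]
```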
Our approach

The new method OLSS, different from either method above, aims to find an approximation of A in a streaming (online) manner, i.e., to form a sketch Â_i ∈ R^{ℓ×d} that approximates [A_1^T A_2^T ··· A_i^T]^T ∈ R^{(n_1+...+n_i)×d}, such that the resulting

$$\hat{f}_i := \arg\min_f \| f(\hat{A}_i) - \hat{B}_i \|_2^2$$

is likely to perform on all tasks as well as

$$f_i^* := \arg\min_f \big\| f\big([A_1^T\, A_2^T\, \cdots\, A_i^T]^T\big) - [B_1^T\, B_2^T\, \cdots\, B_i^T]^T \big\|_2^2. \tag{2}$$

In order to avoid extra memory and computation cost during the training process, we could set the approximation Â_i to have the same number of rows (number of data samples) as the current task A_i. Equations (1) and (2) represent nonlinear least squares problems. It is to be noted that a nonlinear least squares problem can be solved with an approximation deduced from an iteration of linear least squares problems with J^T J Δθ = J^T ΔB, where J is the Jacobian of f at each update (using the Gauss-Newton method). Besides this technique, there are various other approaches to addressing this problem. Here we adopt a cost-effective, simple randomization technique, leverage score sampling, which has been used extensively in solving large-scale linear least squares and low-rank approximation problems [2,5,37].

Statistical Leverage Score and Leverage Score Sampling

Statistical leverage scores measure the non-uniformity structure of a matrix; a higher score indicates a heavier weight of the corresponding row in contributing to the non-uniformity of the matrix. They have been widely used for outlier detection in statistical data analysis. In recent applications [5,37], they also emerge as a fundamental tool for constructing randomized matrix sketches. Given a matrix A ∈ R^{n×d}, a sketch of A is another matrix B ∈ R^{ℓ×d}, where ℓ is significantly smaller than n but B still approximates A well; more specifically, ‖A^T A − B^T B‖_2 ≤ ε ‖A‖_2^2.

Definition 1 [5] Given a matrix A ∈ R^{n×d} with n > d, let U denote the n × d matrix of left singular vectors of A and let U_{(i,:)} denote its i-th row. The statistical leverage score of the i-th row of A is ‖U_{(i,:)}‖_2^2, for i = 1, ..., n.

Theoretical accuracy guarantees have been derived for random sampling methods based on statistical leverage scores [37,20]. Considering our setup, which is to approximate a matrix for solving a least squares problem, and also the computational efficiency, we adopt the following leverage-score-based sampling method: given a sketch size ℓ, define a distribution {p_1, ..., p_n} with p_i = ‖U_{(i,:)}‖_2^2 / d (since d = ‖U‖_F^2 = Σ_{i=1}^{n} ‖U_{(i,:)}‖_2^2, these probabilities sum to one); the sketch is formed by independently and randomly selecting ℓ rows of A without replacement, where the i-th row is selected with probability p_i. Based on this, we are able to select the samples that contribute the most to a given dataset.
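As a concrete illustration of the sampling rule just described (a sketch based on the stated definition, not the authors' code), the snippet below computes leverage scores from a thin SVD and draws ℓ rows of (A, B) without replacement with probabilities proportional to those scores; the helper names are our own.

```python
import numpy as np

def leverage_scores(A):
    """l_i = ||U_(i,:)||_2^2, with U the matrix of left singular vectors of A."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U ** 2, axis=1)

def leverage_score_sample(A, B, ell, rng=None):
    """Select ell rows of (A, B) without replacement, the i-th row being chosen
    with probability proportional to its leverage score (the scores sum to
    rank(A) <= d, so they are renormalised into a distribution here)."""
    rng = rng or np.random.default_rng()
    scores = leverage_scores(A)
    p = scores / scores.sum()
    idx = rng.choice(A.shape[0], size=ell, replace=False, p=p)
    return A[idx], B[idx]
```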
The remaining problem is to embed this in a sequence of tasks and still generate promising approximations for solving the least squares problem. In order to achieve that, we make use of the concept of frequent directions.

Frequent Directions

Frequent directions extends the idea of frequent items in the item frequency approximation problem to a matrix [18,10,34]; it is also used to generate a sketch for a matrix, but in a data streaming environment. As the rows of A ∈ R^{n×d} are fed in one by one, the original idea of frequent directions is to first perform a Singular Value Decomposition (SVD) on the first 2ℓ rows of A, shrink the top ℓ singular values by the same amount, which is determined by the (ℓ+1)-th singular value, and then save the product of the shrunken top ℓ singular values and the top ℓ right singular vectors as a sketch for the first 2ℓ rows of A. With the next ℓ rows fed in, append them behind the sketch and perform the shrink-and-product step again. This process is repeated until reaching the final sketch Â ∈ R^{ℓ×d} for A ∈ R^{n×d}. Different from the leverage score sampling sketching technique, a deterministic bound is guaranteed for the accuracy of the sketch: ‖A^T A − Â^T Â‖_2 ≤ ‖A − A_k‖_F^2 / (ℓ − k) with ℓ > k, where A_k denotes the best rank-k approximation of A [18,10].

Inspired by the routine of frequent directions in a streaming data environment, our OLSS method is constructed as follows. First, initialize a "sketch" matrix Â ∈ R^{ℓ×d} and a corresponding B̂ ∈ R^{ℓ×m}. For the first task (A_1 ∈ R^{n_1×d}, B_1 ∈ R^{n_1×m}), we randomly select ℓ rows of A_1 and (the corresponding rows of) B_1 without replacement according to the leverage score sampling defined above, with the probability distribution based on A_1's leverage scores, and then train the model on the sketch (Â, B̂). After seeing Task 2, we append (A_2, B_2) to (Â, B̂), again randomly select ℓ out of ℓ + n_2 data samples according to leverage score sampling with the probability distribution based on the leverage scores of [Â^T, A_2^T]^T ∈ R^{(ℓ+n_2)×d}, form a new sketch Â ∈ R^{ℓ×d} and B̂ ∈ R^{ℓ×m}, and then train on it. This process is repeated until the end of the task sequence. We present the step-by-step procedure in Algorithm 1.

Main Algorithm

The original ideas of leverage score sampling and frequent directions both come with theoretical accuracy bounds for the sketch on the error term ‖A^T A − Â^T Â‖. The bounds show that the sketch Â contains the relevant information used to form the covariance matrix of all the data samples, A^T A; in other words, the sketch captures the relationship among the data samples in the feature space (which is of dimension d). For a sequence of tasks, it is common to have noisy data samples or interruptions among samples for different tasks. The continuous update of important rows in a matrix (data samples for a sequence of tasks), or the continuous effective forgetting of less useful rows, may serve as a filter to remove the unwanted noise. Different from most existing methods, Algorithm 1 does not work directly with the training model; instead, it can be considered as data pre-processing which constantly extracts useful information from previous and current tasks. Because of its parallel construction, OLSS could be combined with all the aforementioned algorithms to further improve its performance. The final steps of Algorithm 1 read:

8: Randomly select ℓ rows of Â and B̂ without replacement based on probability ‖U_{(j,:)}‖_2^2 / ‖U‖_F^2 for j ∈ {1, ..., n_i + ℓ} (or j ∈ {1, ..., n_i} when i = 1) and set them as Â and B̂ respectively.
9: Train the model with Â ∈ R^{ℓ×d} and B̂ ∈ R^{ℓ×m}.
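Putting the pieces together, a minimal sketch of the OLSS loop over tasks could look as follows (it reuses the leverage_score_sample helper sketched above; `train` and the task iterable are placeholders, the sketch size ℓ is set to the current task size as in the experiments, and details such as mini-batching are omitted).

```python
import numpy as np

def olss_training_loop(tasks, model, train, rng=None):
    """tasks: iterable of (A_i, B_i) NumPy arrays; train(model, A, B) stands in
    for the usual SGD training routine on the compressed data."""
    rng = rng or np.random.default_rng()
    A_sk, B_sk = None, None
    for A_i, B_i in tasks:
        ell = A_i.shape[0]
        # Append the new task to the running sketch.
        A_cat = A_i if A_sk is None else np.vstack([A_sk, A_i])
        B_cat = B_i if B_sk is None else np.vstack([B_sk, B_i])
        # Recompute leverage scores and resample ell rows (cf. Steps 6 and 8).
        A_sk, B_sk = leverage_score_sample(A_cat, B_cat, ell, rng)
        # Step 9: train on the compressed sketch.
        train(model, A_sk, B_sk)
    return model
```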
Regarding the computational complexity, when n_i is large, the SVD of the concatenated (n_i + ℓ) × d matrix in Step 6 is computationally expensive, taking O((n_i + ℓ)d^2) time. This step computes the leverage scores, which can be sped up significantly with various leverage score approximation techniques in the literature [5,2,29]; for example, with the randomized algorithm in [5] the leverage scores can be approximated in O((n_i + ℓ)d log(n_i + ℓ)) time. However, one possible drawback of the above procedure is that the relationship represented in a covariance matrix is linear, so any underlying nonlinear connections among the data samples may not be fully captured in the sketch. Furthermore, the structure of the function f would also affect the information required to be kept in the sketch in order to perform well on solving the least squares problem in (2). As such, there may exist a certain underlying dependency of a data sample's importance on the DNN model architecture. This remains a future research direction.

Experiments

We evaluate the performance of the proposed algorithm OLSS on three classification tasks used as benchmarks in related prior work.

• Rotated MNIST [19]: a variant of the MNIST dataset of handwritten digits [16], in which the digits in each task are rotated by a fixed angle between 0° and 180°. The experiment is on 20 tasks, and each task consists of 60,000 training and 10,000 testing samples.
• Permuted MNIST [14]: a variant of the MNIST dataset [16], in which the digits in each task are transformed by a fixed permutation of pixels. The experiment is on 20 tasks, and each task consists of 60,000 training and 10,000 testing samples.
• Incremental CIFAR100 [28,39]: a variant of the CIFAR object recognition dataset with 100 classes [15]. The experiment is on 20 tasks, each consisting of 5 classes, with 2,500 training and 500 testing samples per task. Each task introduces a new set of classes; for a total of 20 tasks, each new task concerns examples from a disjoint subset of 5 classes.

In the setting of [19] for incremental CIFAR100, a softmax layer is added to the output vector which only allows entries representing the 5 classes in the current task to output values larger than 0. In our setting, we allow the entries representing all previously seen classes to output values larger than 0. We believe this is a more natural setup for continual learning.

For the aforementioned experiments, we compare the performance of the following algorithms:

• A simple SGD predictor.
• EWC [14], as discussed earlier in Section 2.
• GEM [19], as discussed earlier in Section 2.
• iCaRL [28], which classifies based on a nearest-mean-of-exemplars rule, keeps an episodic memory, and updates its exemplar set continuously to prevent catastrophic forgetting. It is only applicable to the incremental CIFAR100 experiment due to its requirement of the same input representation across tasks.
• OLSS (ours).

In addition to these, experiments were also conducted using SI [39] on the same three tasks. However, we observed no significant improvement in performance and a sensitivity to the choice of learning rate, although its learning ability was relatively better than that of online EWC. As such, we do not show SI performance in our plots; it can, however, be tested using our open-sourced code for this paper. The competing algorithms SGD, EWC, GEM and iCaRL were implemented based on the publicly available code from the original authors of the GEM paper [19]; a plain SGD optimizer is used for all algorithms. The DNN used for rotated and permuted MNIST is an MLP with 2 hidden layers, each with 400 rectified linear units, whereas a smaller version of ResNet18 [13], with three times fewer feature maps across all layers, is used for the incremental CIFAR100 experiment. We train 5 epochs with batch size 200 on the rotated and permuted MNIST datasets and 10 epochs with batch size 100 on incremental CIFAR100. The regularization and memory hyper-parameters in EWC, iCaRL and GEM were set as described in [19]. The space parameter ℓ for our OLSS algorithm was set equal to the number of samples in each task. The learning rate for each algorithm was determined through a grid search.
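For reference, a minimal PyTorch version of the MLP described above for the MNIST variants could look as follows (the layer sizes follow the text; the 10-class output width and the absence of dropout are assumptions on our part).

```python
import torch.nn as nn

mnist_mlp = nn.Sequential(
    nn.Flatten(),                      # 28 x 28 grayscale digits -> 784 features
    nn.Linear(28 * 28, 400), nn.ReLU(),
    nn.Linear(400, 400), nn.ReLU(),    # 2 hidden layers of 400 ReLU units
    nn.Linear(400, 10),                # 10 digit classes
)
```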
Results

To evaluate the performance of the different algorithms, we examine:

• The average test accuracy (Figure 1 (left)), defined as $\frac{1}{k}\sum_{i=1}^{k} \mathrm{Acc}(\text{task } i)$ after training x = k tasks.
• Task 1's test accuracy (Figure 1 (right)), defined as Acc(task 1) after training x = k tasks.
• Wall clock time (Table 1).

As observed from Figure 1 (left), across the three benchmarks OLSS achieves similar or slightly higher average task accuracy compared to GEM and clearly outperforms SGD, EWC and iCaRL. This demonstrates the ability of OLSS to continuously select useful data samples during progressive learning and thereby overcome the catastrophic forgetting issue. In terms of maintaining the performance of the earliest task (Task 1) after training a sequence of tasks, OLSS shows the most robust performance, on par with GEM on rotated and permuted MNIST, and slightly worse than GEM as the number of tasks increases in the case of incremental CIFAR100. However, both of these methods significantly outperform SGD, EWC and iCaRL. In order to compare the computational time complexity across the methods, we report the wall clock time in Table 1. Noticeably, SGD is the fastest among all the algorithms but performs the worst, as observed in Figure 1; it is followed by OLSS and EWC (only in the case of CIFAR100 is EWC relatively faster than OLSS). The algorithms iCaRL and GEM both demand much higher computational costs, with GEM being significantly slower than the rest. This behavior is expected due to the requirement of additional constraint validation and, on certain occasions, a gradient projection step (in order to correct for constraint violations across data samples from previously learned tasks stored in the memory buffer) in GEM (see Section 3 in [19]). As such, although the buffered replay-memory-based approach in GEM prevents catastrophic forgetting, its computational cost becomes prohibitive for online training of DNNs in sequential multi-task learning scenarios. Based on the performance and computational efficiency on all three datasets, OLSS emerges as the most favorable among the current state-of-the-art algorithms for continual learning.

Discussions

The space parameter of OLSS (ℓ in Algorithm 1) could be varied to balance its accuracy and efficiency. Here the choice ℓ = n_i (the number of samples in the current task) is selected so that the number of training samples is standardized across all algorithms, enabling effective compression and extraction of data samples for OLSS in a straightforward comparison. However, it is to be noted that even with ℓ = n_i, OLSS does require some additional memory in order to compute the SVD of the concatenation of the previous tasks' sketch and the current task. Unless the algorithm is run in an edge-computing environment with limited on-chip memory, this issue can be ignored. On the other hand, GEM and iCaRL keep an extra episodic memory throughout the training process. The memory size was set to 256 for GEM and 1280 for iCaRL, considering both accuracy and efficiency in the experiments. Variations in the size of the episodic memory would also affect their performance as well as their running time. As described earlier, GEM requires a constraint validation step and a potential gradient projection step for every update of the model parameters. As such, the computational time complexity in this case is proportional to the product of the number of samples kept in the episodic memory, the number of parameters in the model, and the number of iterations required to converge. In contrast, OLSS uses an SVD to compute the leverage scores for each task, which can be achieved in a time complexity proportional to the product of the square of the number of features and the number of data samples. This is considerably less than GEM, as shown in Table 1. The computational complexity can be further reduced with fast leverage score approximation methods such as the randomized algorithm in [5].
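The sketch below illustrates the general idea behind such randomized estimators; it is a simplification (a Gaussian row-compression stands in for the structured transforms of the actual algorithm in [5], and the second projection used there to accelerate the row-norm computation is omitted), not the exact method: compress A, take R from a QR factorization of the compressed matrix, and use the row norms of A R^{-1} as approximate leverage scores.

```python
import numpy as np

def approx_leverage_scores(A, oversample=4, rng=None):
    """Approximate row leverage scores of A (n x d, assuming n >> d) without a full SVD."""
    rng = rng or np.random.default_rng()
    n, d = A.shape
    m = min(n, oversample * d)
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian row-compression of A
    _, R = np.linalg.qr(S @ A)                     # R is d x d, upper triangular
    AR_inv = np.linalg.solve(R.T, A.T).T           # A @ R^{-1} without forming the inverse
    return np.sum(AR_inv ** 2, axis=1)             # row norms approximate the scores
```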
As shown in Figure 2, after training the whole sequence of tasks, both GEM and OLSS are able to preserve the accuracy for most tasks on rotated and permuted MNIST. Nevertheless, it is difficult to completely recover the accuracy of previously trained tasks on CIFAR100 for all algorithms. In the case of a synaptic consolidation based method like EWC, the loss function contains an additional regularization or penalty term for each previously trained task. These additional penalties are isolated from each other. As the number of tasks increases, EWC may lose the elasticity needed to consolidate the overlapping parameters, and as such it shows a steeper slope in the EWC plot of Figure 2.

Conclusions

We presented a new approach to addressing the continual learning problem with deep neural networks. It is inspired by the randomization and compression techniques typically used in statistical analysis. We combined a simple importance sampling technique, leverage score sampling, with the frequent directions concept and developed an online effective forgetting or compression mechanism that preserves meaningful information from previous and current tasks, enabling continual learning across a sequence of tasks. Despite its simple structure, the results on classification benchmark experiments (designed for the catastrophic forgetting issue) demonstrate its effectiveness compared to the recent state of the art.
3,697
1908.00222
2965084509
Recently, there has been growing interest in developing learning-based methods to detect and utilize salient semi-global or global structures, such as junctions, lines, planes, cuboids, smooth surfaces, and all types of symmetries, for 3D scene modeling and understanding. However, the ground truth annotations are often obtained via human labor, which is particularly challenging and inefficient for such tasks due to the large number of 3D structure instances (e.g., line segments) and other factors such as viewpoints and occlusions. In this paper, we present a new synthetic dataset, Structured3D, with the aim of providing large-scale photo-realistic images with rich 3D structure annotations for a wide spectrum of structured 3D modeling tasks. We take advantage of the availability of millions of professional interior designs and automatically extract 3D structures from them. We generate high-quality images with an industry-leading rendering engine. We use our synthetic dataset in combination with real images to train deep neural networks for room layout estimation and demonstrate improved performance on benchmark datasets.
@PARASPLIT Note that our dataset is very different from other popular large-scale 3D datasets, such as NYU v2 @cite_16 , SUN RGB-D @cite_34 , 2D-3D-S @cite_19 @cite_22 , ScanNet @cite_2 , and Matterport3D @cite_17 , in which the ground truth 3D information is stored in the format of point clouds or meshes. These datasets lack ground truth annotations of semi-global or global structures. While it is theoretically possible to extract 3D structure by applying structure detection algorithms to the point clouds or meshes ( , extracting planes from ScanNet as did in @cite_4 ), the detection results are often noisy and even contain errors. In addition, for some types of structure like wireframes and room layouts, how to reliably detect them from raw sensor data remains an active research topic in computer vision.
{ "abstract": [ "", "We present a dataset of large-scale indoor spaces that provides a variety of mutually registered modalities from 2D, 2.5D and 3D domains, with instance-level semantic and geometric annotations. The dataset covers over 6,000m2 and contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, global XYZ images (all in forms of both regular and 360° equirectangular images) as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. The dataset enables development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces. The dataset is available here: this http URL", "Although RGB-D sensors have enabled major break-throughs for several vision tasks, such as 3D reconstruction, we have not attained the same level of success in high-level scene understanding. Perhaps one of the main reasons is the lack of a large-scale benchmark with 3D annotations and 3D evaluation metrics. In this paper, we introduce an RGB-D benchmark suite for the goal of advancing the state-of-the-arts in all major scene understanding tasks. Our dataset is captured by four different sensors and contains 10,335 RGB-D images, at a similar scale as PASCAL VOC. The whole dataset is densely annotated and includes 146,617 2D polygons and 64,595 3D bounding boxes with accurate object orientations, as well as a 3D room layout and scene category for each image. This dataset enables us to train data-hungry algorithms for scene-understanding tasks, evaluate them using meaningful 3D metrics, avoid overfitting to a small testing set, and study cross-sensor bias.", "In this paper, we propose a method for semantic parsing the 3D point cloud of an entire building using a hierarchical approach: first, the raw data is parsed into semantically meaningful spaces (e.g. rooms, etc) that are aligned into a canonical reference coordinate system. Second, the spaces are parsed into their structural and building elements (e.g. walls, columns, etc). Performing these with a strong notation of global 3D space is the backbone of our method. The alignment in the first step injects strong 3D priors from the canonical coordinate system into the second step for discovering elements. This allows diverse challenging scenarios as man-made indoor spaces often show recurrent geometric patterns while the appearance features can change drastically. We also argue that identification of structural elements in indoor spaces is essentially a detection problem, rather than segmentation which is commonly used. We evaluated our method on a new dataset of several buildings with a covered area of over 6, 000m2 and over 215 million points, demonstrating robust results readily useful for practical applications.", "A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available &#x2013; current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. 
We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval.", "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.", "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification." ], "cite_N": [ "@cite_4", "@cite_22", "@cite_34", "@cite_19", "@cite_2", "@cite_16", "@cite_17" ], "mid": [ "2905260191", "2586114507", "1923184257", "2460657278", "2594519801", "125693051", "2964339842" ] }
Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling
Inferring 3D information from 2D sensory data such as images and videos has long been a central research topic in computer vision. Conventional approach to build 3D models of a scene typically relies on detecting, matching, and triangulating local image features (e.g., patches, superpixels, edges, and SIFT features). Although significant progress has been made over the past decades, these methods still suffer from some fundamental problems. In particular, local feature detection is sensitive to a large number of factors such as scene appearance (e.g., textureless areas and repetitive patterns), lighting conditions, and occlusions. Further, the noisy, point cloud-based 3D model often fails to meet the increasing demand for high-level 3D understanding in real-world applications. When perceiving 3D scenes, humans are remarkably effective in using salient global structures such as lines, contours, planes, smooth surfaces, symmetries, and repetitive patterns. Thus, if a reconstruction algorithm can take advantage of such global information, it is natural to expect the algorithm to obtain more accurate results. Traditionally, however, it has been computationally challenging to reliably detect such global structures from noisy local image features. Recently, deep learning-based methods have shown promising results in detecting various forms of structure directly from the images, including lines [9], planes [15,28,12,30], cuboids [7], floorplans [14,13], room layouts [10,34,21], Table 1: An overview of structured 3D scene datasets. † : The actual numbers are not explicitly given and hard to estimate, because these datasets contain images downloaded from Internet (LSUN Room Layout, PanoContext), or from multiple sources (LayoutNet, Realtor360). * : Dataset is unavailable online at the time of submission. Datasets #Scenes #Rooms #Frames Annotated structure PlaneRCNN [12] --100,000 planes Wireframe [9] --5,462 wireframe (2D) SUN Primitive [27] --785 cuboids, other primitives LSUN Room Layout [33] n/a † 5,396 cuboid layout PanoContext [31] n/a † 500 (pano) cuboid layout LayoutNet [34] n/a † 1,071 (pano) cuboid layout Realtor360 * [29] n/a † 2,573 (pano) Manhattan layout Raster-to-Vector [14] 870 --floorplan Structured3D 3,500 21,835 196,515 "primitive + relationship" abstracted 3D shapes [22,25], and smooth surfaces [8]. With the fast development of deep learning methods comes the need for large amounts of accurately annotated data. In order to train the proposed neural networks, most prior work collects their own sets of images and manually label the structure of interest in them. Such a strategy has several shortcomings. First, due to the tedious process of manually labelling and verifying all the structure instances (e.g., line segments) in each image, existing datasets typically have limited sizes and scene diversity. And the annotations may also contain errors. Second, since each study primarily focuses on one type of structure, none of these datasets has multiple types of structure labeled. As a result, existing methods are unable to exploit relations between different types of structure (e.g., lines and planes) as humans do for effective, efficient, and robust 3D reconstruction. In this paper, we present a large synthetic dataset with rich annotations of 3D structure and photo-realistic 2D renderings of indoor man-made environments (Figure 1). 
At the core of our dataset design is a unified representation of 3D structure which enables us to efficiently capture multiple types of 3D structure in the scene. Specifically, the proposed representation considers any structure as relationship among geometric primitives. For example, a "wireframe" structure encodes the incidence and intersection relationship between line segments, whereas a "cuboid" structure encodes the rotational and reflective symmetry relationship among its planar faces. With our "primitive + relationship" representation, one can easily derive the ground truth annotations for a wide variety of semi-global and global structures (e.g., lines, wireframes, planes, regular shapes, floorplans, and room layouts), and also exploit their relations in future data-driven approaches (e.g., the wireframe formed by intersecting planar surfaces in the scene). To create a large-scale dataset with the aim to facilitate research on data-driven methods for structured 3D scene understanding, we leverage the availability of millions of professional interior designs and millions of production-level 3D object models -all coming with fine geometric details and high-resolution texture (Figure 1(a)). We first use computer programs to automatically extract information about 3D structure from the original house design files. As shown in Figure 1(b), our dataset contains rich annotations of 3D room structure including a variety of geometric primitives and relationships. To further generate photo-realistic 2D images (Figure 1(c)), we utilize industry-leading rendering engines to model the lighting conditions. Currently, our dataset consists of more than 196k images of 21,835 rooms in 3,500 scenes (i.e., houses). To showcase the usefulness and uniqueness of the proposed Structured3D dataset, we train deep networks for room layout estimation on a subset of the dataset. We show that the models first trained on our synthetic data and then fine-tuned on real data outperform the models trained on real data only. We also show good generalizability of the models trained on our synthetic data by directly applying them to real world images. In summary, the main contributions of this paper are: • We introduce a unified "primitive + relationship" representation for 3D structure. This representation enables us to efficiently capture a wide variety of semiglobal and global 3D structures, as well as their mutual relationships. • We create the Structured3D dataset, which contains rich ground truth 3D structure annotations of 21,835 rooms in 3,500 scenes, and more than 196k photorealistic 2D renderings of the rooms. • We verify the usefulness of our dataset by using it to train deep networks for room layout estimation and demonstrating improved performance on benchmark datasets. A Unified Representation of 3D Structure The main goal of our dataset is to provide rich annotations of ground truth 3D structure. A naive way to do so is generating and storing different types of 3D annotations in the same format as existing works, like wireframes as in [9], planes as in [12], floorplans as in [14], and so on. But this leads to a lot of redundancy. For example, planar surfaces in man-made environments are often bounded by a number of line segments, which are part of the wireframe. Even worse, by representing wireframes and planes separately, the relationships between them is also lost. 
In this paper, we present a unified representation of 3D structure in man-made environments, in order to minimize the redundancy in encoding multiple types of 3D structure while preserving their mutual relationships. We show how the most common types of structure previously studied in the literature (e.g., planes, cuboids, wireframes, room layouts, and floorplans) can be derived from our representation. Our representation of structure is largely inspired by the early work of Witkin and Tenenbaum [24], which characterizes structure as "a shape, pattern, or configuration that replicates or continues with little or no change over an interval of space and time". Accordingly, to describe any structure, we need to specify: (i) what pattern is continuing or replicating (e.g., a patch, an edge, or a texture descriptor), and (ii) the domain of its replication or continuation. In this paper, we call the former primitives and the latter relationships.

The "Primitive + Relationship" Representation

We now show how to describe a man-made environment using the "primitive + relationship" representation. For ease of exposition, we assume all objects in the scene can be modeled by piece-wise planar surfaces, but our representation can be easily extended to more general surfaces. An illustration of our representation is shown in Figure 3.

Primitives. Generally, a man-made environment consists of the following geometric primitives:

• Planes P: We model the scene as a collection of planar surfaces P = {p_1, p_2, ...}, where each plane is described by its parameters p = {n, d}, i.e., its normal and offset.
• Lines L: When two planes intersect in 3D space, a line is created. We use L = {l_1, l_2, ...} to represent the set of all 3D lines in the scene.
• Junction points X: When two lines meet in 3D space, a junction point is formed. We use X = {x_1, x_2, ...} to represent the set of all junction points.

Relationships. Next, we define some common types of relationships between the geometric primitives:

• Plane-line relationships (R_1): We use a matrix W_1 to record all incidence and intersection relationships between planes in P and lines in L. Specifically, the ij-th entry of W_1 is 1 if l_i is on p_j, and 0 otherwise. Note that two planes intersect at some line if and only if the corresponding entry in W_1^T W_1 is nonzero.
• Line-point relationships (R_2): Similarly, we use a matrix W_2 to record all incidence and intersection relationships between lines in L and points in X. Specifically, the mn-th entry of W_2 is 1 if x_m is on l_n, and 0 otherwise. Note that two lines intersect at some junction if and only if the corresponding entry in W_2^T W_2 is nonzero.
• Cuboids (R_3): A cuboid is a special arrangement of plane primitives with rotational and reflection symmetry along the x-, y- and z-axes. The corresponding symmetry group is the dihedral group D_{2h}.
• Manhattan world (R_4): This is a special type of 3D structure commonly used for indoor and outdoor scene modeling. It can be viewed as a grouping relationship in which all the plane primitives are grouped into three classes, P_1, P_2, and P_3, with P = P_1 ∪ P_2 ∪ P_3, corresponding to three mutually orthogonal directions.
• Semantic objects (R_5): Semantic information is critical for many 3D computer vision tasks. It can be regarded as another type of grouping relationship, in which each semantic object instance corresponds to one or more primitives defined above. For example, each "wall", "ceiling", or "floor" instance is associated with one plane primitive; each "chair" instance is associated with a set of multiple plane primitives. Further, such a grouping is hierarchical. For example, we can further group one floor, one ceiling, and multiple walls to form a "living room" instance, and a "door" or a "window" is an opening which connects two rooms (or one room and the outer space).

Note that the relationships are not mutually exclusive, in the sense that a primitive can belong to multiple relationship instances of the same type or of different types. For example, a plane primitive can be shared by two cuboids and, at the same time, belong to one of the three classes in the Manhattan world model.
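A minimal sketch of what these containers might look like in code is shown below; the class and field names are illustrative assumptions on our part, not the dataset's actual schema, but the incidence conventions follow the definitions of W_1 and W_2 above.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Plane:
    normal: np.ndarray   # n, unit normal of the plane
    offset: float        # d, so points x on the plane satisfy n . x + d = 0

@dataclass
class Scene:
    planes: list          # P, list of Plane
    lines: list           # L, e.g. (point, direction) pairs
    junctions: np.ndarray # X, shape (num_junctions, 3)
    W1: np.ndarray        # |L| x |P| incidence: W1[i, j] = 1 iff line l_i lies on plane p_j
    W2: np.ndarray        # |X| x |L| incidence: W2[m, n] = 1 iff junction x_m lies on line l_n

def planes_share_line(scene, j, k):
    """Planes p_j and p_k intersect along some line iff (W1^T W1)[j, k] != 0."""
    return (scene.W1.T @ scene.W1)[j, k] != 0

def plane_boundary_lines(scene, j):
    """Indices of the lines incident to plane p_j (candidate boundary segments)."""
    return np.nonzero(scene.W1[:, j])[0]
```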
Discussion

The primitives and relationships we discussed above are just a few of the most common examples; they are by no means exhaustive. For example, our representation can be easily extended to include other primitives such as parametric surfaces. And besides cuboids, there are many other types of regular or symmetric shapes in man-made environments, where each type corresponds to a different symmetry group.

Relation to Existing Models

Given our representation, which contains primitives P = {P, L, X} and relationships R = {R_1, R_2, ...}, we show how several types of 3D structure commonly studied in the literature can be derived from it. We again refer readers to Figure 2 for illustrations of these structures.

Planes: A large volume of studies in the literature model the scene as a collection of 3D planes, where each plane is represented by its parameters and boundary. To generate such a model, we simply use the plane primitives P. For each p ∈ P, we further obtain its boundary by using matrix W_1 in R_1 to find all the lines in L that form an incidence relationship with p.

Wireframes: A wireframe consists of lines L and junction points X, and their incidence and intersection relationships (R_2).

Cuboids: This model is the same as R_3.

Manhattan layouts: A Manhattan room layout model includes a "room" as defined in R_5 which also satisfies the Manhattan world assumption (R_4).

Floorplans: A floorplan is a 2D vector representation which consists of a set of line segments and semantic labels (e.g., room types). To obtain such a vector representation, we can identify all lines in L and junction points in X which lie on a "floor" (as defined in R_5). To further obtain the semantic room labels, we can project all "rooms", "doors", and "windows" (as defined in R_5) to this floor.

Abstracted 3D shapes: In addition to room structures, our representation can also be applied to individual 3D object models to create abstractions in the form of wireframes or cuboids, as described above.

The Structured3D Dataset

Our general, unified representation enables us to encode a rich set of geometric primitives and relationships for structured 3D modeling. With this representation, our ultimate goal is to build a dataset which can be used to train machines to achieve human-level understanding of the 3D environment. As a first step towards this goal, in this section we describe our on-going effort to create a large-scale dataset of indoor scenes which includes (i) ground truth 3D structure annotations of the scene and (ii) realistic 2D renderings of the scene. Note that in this work we focus on extracting ground truth annotations of the room structure only. We plan to extend our dataset to include 3D structure annotations of individual furniture models in the future.
Extraction of Structured 3D Models

To extract a "primitive + relationship" representation of the 3D scene, we make use of a large database of over one million house designs hand-crafted by professional designers. An example design is shown in Figure 4(a). All information of the design is stored in an industry-standard format in the database, so that specifications about the geometry (e.g., the precise length, width, and height of each wall), textures and materials, and functions (e.g., which room the wall belongs to) of all objects can be easily retrieved. From the database, we have selected 3,500 house designs with about 21,854 rooms. We created a computer program to automatically extract all the geometric primitives associated with the room structure, which consists of the ceiling, floor, walls, and openings (doors and windows). Given the precise measurements and associated information of these entities in the database, it is straightforward to generate all planes, lines, and junctions, as well as their relationships (R_1 and R_2). Since the measurements are highly accurate and noise-free, other types of relationships, such as cuboids (R_3) and the Manhattan world (R_4), can also be easily obtained by clustering the primitives, followed by a geometric verification process. Finally, to include semantic information (R_5) in our representation, we simply map the relevant labels provided by the professional designers to the geometric primitives in our representation. Figure 3 shows examples of the extracted geometric primitives and relationships.

Photo-realistic 2D Rendering

We have developed a photo-realistic renderer on top of Embree [23], an open-source collection of ray-tracing kernels for x86 CPUs. Our renderer uses the well-known path tracing [17] method, a Monte Carlo approach to approximating realistic Global Illumination (GI) for rendering. Each room is manually furnished by professional designers using over one million CAD models of furniture from world-leading manufacturers. These high-resolution furniture models are measured in real-world dimensions and are used in real production. A default lighting setup is also provided for each room. Figure 4 compares the 3D models in our database with those in the public SUNCG dataset [20], which are created using Planner 5D [1], an online tool for amateur interior design. At the time of rendering, a panorama or pin-hole camera is placed at random locations not occupied by objects in the room. We use 1024 × 512 resolution for panoramas and 640 × 480 for perspective images. Figure 5 shows example panoramas rendered by our engine. For each room, we generate a few different configurations (full, simple, and empty) by removing some or all of the furniture. We also modify the lighting setup to generate images with different temperatures. Besides images, our dataset also includes the corresponding depth maps and semantic labels, as they may be useful either as inputs to machine learning algorithms or to help multi-task learning. Figure 6 further illustrates the degree of photo-realism of our dataset, where we compare the rendered images with photos of real decoration guided by the design (the right column of Figure 6 shows the real-world decoration).
We would like to emphasize the potential of our dataset in terms of extension capabilities. As we mentioned before, the unified representation enables us to include many other types of structure in the dataset. As for 2D rendering, depending on the application, we can easily simulate different effects such as lighting conditions, fisheye and novel camera designs, motion blur, and imaging noise. Furthermore, the dataset may be extended to include videos for applications like floorplan reconstruction [13] and visual SLAM [4].

Experiments

To demonstrate the benefits of our new dataset, we use it to train deep neural networks for room layout estimation, an important task in structured 3D modeling.

Experiment Setup

Real dataset. We use the same dataset as LayoutNet [34]. The dataset consists of images from PanoContext [31] and 2D-3D-S [2], including 818 training images, 79 validation images, and 166 test images. Note that both datasets only provide cuboid-shape layout annotations.

Our Structured3D dataset. In this experiment, we use a subset of panoramas with the original lighting and configuration. Each panorama corresponds to a different room in our dataset. We show statistics of the different room layouts in Table 2. Since the current real dataset only contains cuboid-shape layout annotations (i.e., 4 corners), we choose 12k panoramic images with a cuboid-shape layout from our dataset. We split the images into 10k for training, 1k for validation, and 1k for testing.

Evaluation metrics. Following [34,21], we adopt three standard metrics: i) 3D IoU: intersection over union between the predicted 3D layout and the ground truth; ii) Corner Error (CE): normalized L2 distance between predicted corners and the ground truth; and iii) Pixel Error (PE): pixel-wise error between predicted plane classes and the ground truth.
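As an illustration of the last two metrics, a minimal sketch is given below; it assumes corners are given as matched (K, 2) arrays in pixel coordinates and that the corner distance is normalized by the image diagonal, which follows common practice in the layout estimation literature rather than the exact benchmark evaluation code.

```python
import numpy as np

def corner_error(pred_corners, gt_corners, image_w, image_h):
    """Mean L2 distance between matched predicted and ground-truth corners,
    normalized by the image diagonal (assumption)."""
    diag = np.sqrt(image_w ** 2 + image_h ** 2)
    return np.mean(np.linalg.norm(pred_corners - gt_corners, axis=1)) / diag

def pixel_error(pred_labels, gt_labels):
    """Fraction of pixels whose predicted layout plane class differs from the ground truth."""
    return np.mean(pred_labels != gt_labels)
```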
We sample 2.5k, 5k, and 10k synthetic images for pre-training, then fine-tune the model on the real dataset. The results are shown in Table 4. As expected, using more synthetic data generally improves the performance. Table 4: Quantitative evaluation using varying synthetic data sizes in pre-training. The best and second-best results are boldfaced and underlined, respectively. Generalization to different domains. To compare the generalizability of the models trained on the synthetic dataset and the real dataset, we conduct experiments in two configurations: i) training on our synthetic data, and ii) training on one real dataset. We then test both models on the other real dataset. Note that the data used in LayoutNet comes from two domains, i.e., PanoContext (PC) and 2D-3D-S; in this experiment, we use the two datasets separately. As shown in Table 5, when tested on PanoContext, the model trained on our data significantly outperforms the one trained on 2D-3D-S. When tested on 2D-3D-S, the model trained on our data is competitive with or slightly better than the one trained on PanoContext. Note that our dataset and PanoContext both focus on residential scenes, whereas the images in 2D-3D-S are taken from office areas. Limitation of real datasets. Due to human error, the annotations in real datasets are not always consistent with the actual room layout. In the left image of Figure 7, the room has a non-cuboid layout, but the ground truth is labeled as a cuboid. In the right image, the front wall is missing from the ground truth labeling. These examples illustrate the limitation of using real datasets as benchmarks. We avoid such errors in our dataset by automatically generating the ground truth from the original design files. (Table header: Methods; Synthetic Data Size; PanoContext: 3D IoU (%) ↑, CE (%) ↓, PE (%) ↓; 2D-3D-S: 3D IoU (%) ↑, CE (%) ↓, PE (%) ↓.) Conclusion In this paper, we present Structured3D, a large synthetic dataset with rich ground truth 3D structure annotations and photo-realistic 2D renderings. We view this work as an important and exciting step towards building intelligent machines that can achieve human-level holistic 3D scene understanding: the unified "primitive + relationship" representation enables us to efficiently capture a wide variety of 3D structures and their relations, whereas the availability of millions of professional interior designs makes it possible to generate a virtually unlimited amount of photo-realistic images and videos. In the future, we will continue to add more 3D structure annotations of the scenes and objects to the dataset, and explore novel ways to use the dataset to advance techniques for structured 3D modeling and understanding.
3,766
1908.00222
2965084509
Recently, there has been growing interest in developing learning-based methods to detect and utilize salient semi-global or global structures, such as junctions, lines, planes, cuboids, smooth surfaces, and all types of symmetries, for 3D scene modeling and understanding. However, the ground truth annotations are often obtained via human labor, which is particularly challenging and inefficient for such tasks due to the large number of 3D structure instances (e.g., line segments) and other factors such as viewpoints and occlusions. In this paper, we present a new synthetic dataset, Structured3D, with the aim of providing large-scale photo-realistic images with rich 3D structure annotations for a wide spectrum of structured 3D modeling tasks. We take advantage of the availability of millions of professional interior designs and automatically extract 3D structures from them. We generate high-quality images with an industry-leading rendering engine. We use our synthetic dataset in combination with real images to train deep neural networks for room layout estimation and demonstrate improved performance on benchmark datasets.
In recent years, synthetic datasets have played an important role in successful training of deep neural networks. Notable examples for indoor scene understanding include SUNCG @cite_10 , SceneNet RGB-D @cite_21 , and InteriorNet @cite_23 . These datasets exceed real datasets in terms of scene diversity and frame numbers. But just like their real counterparts, these datasets lack ground truth structure annotations. Another issue with some synthetic datasets is the degree of realism in both the 3D models and the 2D renderings. @cite_8 shows that physically-based rendering could boost the performance of various indoor scene understanding tasks. To ensure the quality of our dataset, we make use of 3D room models created by professional designers and the state-of-the-art industrial rendering engines in this work.
{ "abstract": [ "We introduce SceneNet RGB-D, a dataset providing pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection. It also provides perfect camera poses and depth data, allowing investigation into geometric computer vision problems such as optical flow, camera pose estimation, and 3D scene labelling tasks. Random sampling permits virtually unlimited scene configurations, and here we provide 5M rendered RGB-D images from 16K randomly generated 3D trajectories in synthetic layouts, with random but physically simulated object configurations. We compare the semantic segmentation performance of network weights produced from pretraining on RGB images from our dataset against generic VGG-16 ImageNet weights. After fine-tuning on the SUN RGB-D and NYUv2 real-world datasets we find in both cases that the synthetically pre-trained network outperforms the VGG-16 weights. When synthetic pre-training includes a depth channel (something ImageNet cannot natively provide) the performance is greater still. This suggests that large-scale high-quality synthetic RGB datasets with task-specific labels can be more useful for pretraining than real-world generic pre-training such as ImageNet. We host the dataset at http: robotvault. bitbucket.io scenenet-rgbd.html.", "This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http: sscnet.cs.princeton.edu.", "", "Indoor scene understanding is central to applications such as robot navigation and human companion assistance. Over the last years, data-driven deep neural networks have outperformed many traditional approaches thanks to their representation learning capabilities. One of the bottlenecks in training for better representations is the amount of available per-pixel ground truth data that is required for core scene understanding tasks such as semantic segmentation, normal prediction, and object boundary detection. To address this problem, a number of works proposed using synthetic data. However, a systematic study of how such synthetic data is generated is missing. In this work, we introduce a large-scale synthetic dataset with 500K physically-based rendered images from 45K realistic 3D indoor scenes. We study the effects of rendering methods and scene lighting on training for three computer vision tasks: surface normal prediction, semantic segmentation, and object boundary detection. 
This study provides insights into the best practices for training with synthetic data (more realistic rendering is worth it) and shows that pretraining with our new synthetic dataset can improve results beyond the current state of the art on all three tasks." ], "cite_N": [ "@cite_21", "@cite_10", "@cite_23", "@cite_8" ], "mid": [ "2780351918", "2557465155", "2963225136", "2563100679" ] }
Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling
Inferring 3D information from 2D sensory data such as images and videos has long been a central research topic in computer vision. Conventional approach to build 3D models of a scene typically relies on detecting, matching, and triangulating local image features (e.g., patches, superpixels, edges, and SIFT features). Although significant progress has been made over the past decades, these methods still suffer from some fundamental problems. In particular, local feature detection is sensitive to a large number of factors such as scene appearance (e.g., textureless areas and repetitive patterns), lighting conditions, and occlusions. Further, the noisy, point cloud-based 3D model often fails to meet the increasing demand for high-level 3D understanding in real-world applications. When perceiving 3D scenes, humans are remarkably effective in using salient global structures such as lines, contours, planes, smooth surfaces, symmetries, and repetitive patterns. Thus, if a reconstruction algorithm can take advantage of such global information, it is natural to expect the algorithm to obtain more accurate results. Traditionally, however, it has been computationally challenging to reliably detect such global structures from noisy local image features. Recently, deep learning-based methods have shown promising results in detecting various forms of structure directly from the images, including lines [9], planes [15,28,12,30], cuboids [7], floorplans [14,13], room layouts [10,34,21], Table 1: An overview of structured 3D scene datasets. † : The actual numbers are not explicitly given and hard to estimate, because these datasets contain images downloaded from Internet (LSUN Room Layout, PanoContext), or from multiple sources (LayoutNet, Realtor360). * : Dataset is unavailable online at the time of submission. Datasets #Scenes #Rooms #Frames Annotated structure PlaneRCNN [12] --100,000 planes Wireframe [9] --5,462 wireframe (2D) SUN Primitive [27] --785 cuboids, other primitives LSUN Room Layout [33] n/a † 5,396 cuboid layout PanoContext [31] n/a † 500 (pano) cuboid layout LayoutNet [34] n/a † 1,071 (pano) cuboid layout Realtor360 * [29] n/a † 2,573 (pano) Manhattan layout Raster-to-Vector [14] 870 --floorplan Structured3D 3,500 21,835 196,515 "primitive + relationship" abstracted 3D shapes [22,25], and smooth surfaces [8]. With the fast development of deep learning methods comes the need for large amounts of accurately annotated data. In order to train the proposed neural networks, most prior work collects their own sets of images and manually label the structure of interest in them. Such a strategy has several shortcomings. First, due to the tedious process of manually labelling and verifying all the structure instances (e.g., line segments) in each image, existing datasets typically have limited sizes and scene diversity. And the annotations may also contain errors. Second, since each study primarily focuses on one type of structure, none of these datasets has multiple types of structure labeled. As a result, existing methods are unable to exploit relations between different types of structure (e.g., lines and planes) as humans do for effective, efficient, and robust 3D reconstruction. In this paper, we present a large synthetic dataset with rich annotations of 3D structure and photo-realistic 2D renderings of indoor man-made environments (Figure 1). 
At the core of our dataset design is a unified representation of 3D structure which enables us to efficiently capture multiple types of 3D structure in the scene. Specifically, the proposed representation considers any structure as relationship among geometric primitives. For example, a "wireframe" structure encodes the incidence and intersection relationship between line segments, whereas a "cuboid" structure encodes the rotational and reflective symmetry relationship among its planar faces. With our "primitive + relationship" representation, one can easily derive the ground truth annotations for a wide variety of semi-global and global structures (e.g., lines, wireframes, planes, regular shapes, floorplans, and room layouts), and also exploit their relations in future data-driven approaches (e.g., the wireframe formed by intersecting planar surfaces in the scene). To create a large-scale dataset with the aim to facilitate research on data-driven methods for structured 3D scene understanding, we leverage the availability of millions of professional interior designs and millions of production-level 3D object models -all coming with fine geometric details and high-resolution texture (Figure 1(a)). We first use computer programs to automatically extract information about 3D structure from the original house design files. As shown in Figure 1(b), our dataset contains rich annotations of 3D room structure including a variety of geometric primitives and relationships. To further generate photo-realistic 2D images (Figure 1(c)), we utilize industry-leading rendering engines to model the lighting conditions. Currently, our dataset consists of more than 196k images of 21,835 rooms in 3,500 scenes (i.e., houses). To showcase the usefulness and uniqueness of the proposed Structured3D dataset, we train deep networks for room layout estimation on a subset of the dataset. We show that the models first trained on our synthetic data and then fine-tuned on real data outperform the models trained on real data only. We also show good generalizability of the models trained on our synthetic data by directly applying them to real world images. In summary, the main contributions of this paper are: • We introduce a unified "primitive + relationship" representation for 3D structure. This representation enables us to efficiently capture a wide variety of semiglobal and global 3D structures, as well as their mutual relationships. • We create the Structured3D dataset, which contains rich ground truth 3D structure annotations of 21,835 rooms in 3,500 scenes, and more than 196k photorealistic 2D renderings of the rooms. • We verify the usefulness of our dataset by using it to train deep networks for room layout estimation and demonstrating improved performance on benchmark datasets. A Unified Representation of 3D Structure The main goal of our dataset is to provide rich annotations of ground truth 3D structure. A naive way to do so is generating and storing different types of 3D annotations in the same format as existing works, like wireframes as in [9], planes as in [12], floorplans as in [14], and so on. But this leads to a lot of redundancy. For example, planar surfaces in man-made environments are often bounded by a number of line segments, which are part of the wireframe. Even worse, by representing wireframes and planes separately, the relationships between them is also lost. 
In this paper, we present a unified representation of 3D structure in man-made environments, in order to minimize the redundancy in encoding multiple types of 3D structure while preserving their mutual relationships. We show how the most common types of structure previously studied in the literature (e.g., planes, cuboids, wireframes, room layouts, and floorplans) can be derived from our representation. (In the accompanying figure, we highlight a "room", a "balcony", and the "door" connecting them.) Our representation of structure is largely inspired by the early work of Witkin and Tenenbaum [24], which characterizes structure as "a shape, pattern, or configuration that replicates or continues with little or no change over an interval of space and time". Accordingly, to describe any structure, we need to specify: (i) what pattern is continuing or replicating (e.g., a patch, an edge, or a texture descriptor), and (ii) the domain of its replication or continuation. In this paper, we call the former primitives and the latter relationships. The "Primitive + Relationship" Representation We now show how to describe a man-made environment using the "primitive + relationship" representation. For ease of exposition, we assume all objects in the scene can be modeled by piecewise planar surfaces, but our representation can be easily extended to more general surfaces. An illustration of our representation is shown in Figure 3. Primitives Generally, a man-made environment consists of the following geometric primitives: • Planes P: We model the scene as a collection of planar surfaces P = {p_1, p_2, . . .}, where each plane is described by its parameters p = {n, d}. • Lines L: When two planes intersect in 3D space, a line is created. We use L = {l_1, l_2, . . .} to represent the set of all 3D lines in the scene. • Junction points X: When two lines meet in 3D space, a junction point is formed. We use X = {x_1, x_2, . . .} to represent the set of all junction points. Relationships Next, we define some common types of relationships between the geometric primitives: • Plane-line relationships (R1): We use a matrix W1 to record all incidence and intersection relationships between planes in P and lines in L. Specifically, the ij-th entry of W1 is 1 if l_i is on p_j, and 0 otherwise. Note that two planes intersect at some line if and only if the corresponding entry of W1^T W1 is nonzero. • Line-point relationships (R2): Similarly, we use a matrix W2 to record all incidence and intersection relationships between lines in L and points in X. Specifically, the mn-th entry of W2 is 1 if x_m is on l_n, and 0 otherwise. Note that two lines intersect at some junction if and only if the corresponding entry of W2^T W2 is nonzero. • Cuboids (R3): A cuboid is a special arrangement of plane primitives with rotational and reflection symmetry along the x-, y-, and z-axes. The corresponding symmetry group is the dihedral group D2h. • Manhattan world (R4): This is a special type of 3D structure commonly used for indoor and outdoor scene modeling. It can be viewed as a grouping relationship, in which all the plane primitives can be grouped into three classes P1, P2, and P3 (with P = P1 ∪ P2 ∪ P3), corresponding to three mutually orthogonal directions. • Semantic objects (R5): Semantic information is critical for many 3D computer vision tasks. It can be regarded as another type of grouping relationship, in which each semantic object instance corresponds to one or more of the primitives defined above. 
For example, each "wall", "ceiling", or "floor" instance is associated with one plane primitive, while each "chair" instance is associated with a set of multiple plane primitives. Further, such a grouping is hierarchical. For example, we can further group one floor, one ceiling, and multiple walls to form a "living room" instance, and a "door" or a "window" is an opening which connects two rooms (or one room and the outer space). Note that the relationships are not mutually exclusive, in the sense that a primitive can belong to multiple relationship instances of the same type or of different types. For example, a plane primitive can be shared by two cuboids and, at the same time, belong to one of the three classes in the Manhattan world model. Discussion The primitives and relationships discussed above are just a few of the most common examples; they are by no means exhaustive. For example, our representation can be easily extended to include other primitives such as parametric surfaces. And besides cuboids, there are many other types of regular or symmetric shapes in man-made environments, where each type corresponds to a different symmetry group. Relation to Existing Models Given our representation, which contains primitives P = {P, L, X} and relationships R = {R1, R2, . . .}, we show how several types of 3D structure commonly studied in the literature can be derived from it. We again refer readers to Figure 2 for illustrations of these structures. Planes: A large volume of studies in the literature model the scene as a collection of 3D planes, where each plane is represented by its parameters and boundary. To generate such a model, we simply use the plane primitives P. For each p ∈ P, we further obtain its boundary by using the matrix W1 in R1 to find all the lines in L that form an incidence relationship with p. Wireframes: A wireframe consists of the lines L and junction points X, and their incidence and intersection relationships (R2). Cuboids: This model is the same as R3. Manhattan layouts: A Manhattan room layout model includes a "room" as defined in R5 which also satisfies the Manhattan world assumption (R4). Floorplans: A floorplan is a 2D vector representation which consists of a set of line segments and semantic labels (e.g., room types). To obtain such a vector representation, we can identify all lines in L and junction points in X which lie on a "floor" (as defined in R5). To further obtain the semantic room labels, we can project all "rooms", "doors", and "windows" (as defined in R5) onto this floor. Abstracted 3D shapes: In addition to room structures, our representation can also be applied to individual 3D object models to create abstractions in the form of wireframes or cuboids, as described above. The Structured3D Dataset Our general, unified representation enables us to encode a rich set of geometric primitives and relationships for structured 3D modeling. With this representation, our ultimate goal is to build a dataset which can be used to train machines to achieve human-level understanding of the 3D environment. As a first step towards this goal, in this section we describe our ongoing effort to create a large-scale dataset of indoor scenes which includes (i) ground truth 3D structure annotations of the scene and (ii) realistic 2D renderings of the scene. Note that in this work we focus on extracting ground truth annotations for the room structure only. We plan to extend our dataset to include 3D structure annotations of individual furniture models in the future. 
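As a small illustration of the incidence matrices W1 (plane-line, R1) and W2 (line-point, R2) and the derivations above, here is a toy example; the matrices and helper functions are invented for illustration and are not part of any released dataset tooling.

```python
import numpy as np

# Toy incidence matrices. W1[i, j] = 1 if line l_i lies on plane p_j (R1);
# W2[m, n] = 1 if junction x_m lies on line l_n (R2).
W1 = np.array([[1, 1, 0],    # l_0 = intersection of p_0 and p_1
               [1, 0, 1],    # l_1 = intersection of p_0 and p_2
               [0, 1, 1]])   # l_2 = intersection of p_1 and p_2
W2 = np.array([[1, 1, 0],    # x_0 lies on l_0 and l_1
               [1, 0, 1]])   # x_1 lies on l_0 and l_2

def lines_on_plane(W1, j):
    """Boundary candidates of plane p_j: all lines incident to it."""
    return np.flatnonzero(W1[:, j])

def planes_intersect(W1, j, k):
    """p_j and p_k intersect iff they share a line, i.e. (W1^T W1)[j, k] != 0."""
    return (W1.T @ W1)[j, k] > 0

def wireframe(W2):
    """Wireframe as a mapping line -> junctions lying on it."""
    return {n: np.flatnonzero(W2[:, n]).tolist() for n in range(W2.shape[1])}

print(lines_on_plane(W1, 0))       # [0 1]
print(planes_intersect(W1, 0, 1))  # True  (they share l_0)
print(wireframe(W2))               # {0: [0, 1], 1: [0], 2: [1]}
```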
Extraction of Structured 3D Models To extract a "primitive + relationship" representation of the 3D scene, we make use of a large database of over one million house designs hand-crafted by professional designers. An example design is shown in Figure 4(a). All information of the design is stored in an industry-standard format in the database so that specifications about the geometry (e.g., the precise length, width, and height of each wall), textures and materials, and functions (e.g., which room the wall belongs to) of all objects can be easily retrieved. From the database, we have selected 3,500 house designs with about 21,854 rooms. We created a computer program to automatically extract all the geometric primitives associated with the room structure, which consists of the ceiling, The 3D models in SUNCG dataset [20] are created using Planner 5D [1], an online tool for amateur interior design. floor, walls, and openings (doors and windows). Given the precise measurements and associated information of these entities in the database, it is straightforward to generate all planes, lines, and junctions, as well as their relationships (R 1 and R 2 ). Since the measurements are highly accurate and noisefree, other types of relationship such a Manhattan world (R 3 ) and cuboids (R 4 ) can also be easily obtained by clustering the primitives, followed by a geometric verification process. Finally, to include semantic information (R 5 ) into our representation, we simply map the relevant labels provided by the professional designers to the geometric primitives in our representation. Figure 3 shows examples of the extracted geometric primitives and relationships. Photo-realistic 2D Rendering We have developed a photo-realistic renderer on top of Embree [23], an open-source collection of ray-tracing kernels for x86 CPUs. Our renderer uses the well-known path tracing [17] method, a Monte Carlo approach to approximating realistic Global Illumination (GI) for rendering. Each room is manually created by professional designers with over one million CAD models of furniture from world-leading manufacturers. These high-resolution furniture models are measured in real-world dimensions and being used in real production. A default lighting setup is also provided for each room. Figure 4 compares the 3D models in our database with those in the public SUNCG dataset [20], which are created using Planner 5D [1], an online tool for amateur interior design. At the time of rendering, a panorama or pin-hole camera is placed at random locations not occupied by objects in the room. We use 1024 × 512 resolution for panoramas and 640 × 480 for perspective images. Figure 5 shows example panoramas rendered by our engine. For each room, we generate a few different configurations (full, simple, and empty) by removing some or all the furniture. We also modify the lighting setup to generate images with different tem- Figure 6: Photo-realistic rendering vs. real-world decoration. We encourage readers to guess which column corresponds to real-world decoration. The answer is in the footnote 1 . perature. Besides images, our dataset also includes the corresponding depth maps and semantic labels, as they may be useful either as inputs to machine learning algorithms or to help multi-task learning. Figure 6 further illustrates the degree of photo-realism of our dataset, where we compare the rendered images with photos of real decoration guided by the design. 
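The random camera placement described above can be approximated with simple rejection sampling against an occupancy test. The sketch below assumes an axis-aligned room bounding box, a caller-supplied is_free(point) predicate, and a fixed camera height; all of these are illustrative assumptions rather than the renderer's actual interface.

```python
import random

def sample_camera_position(room_min, room_max, is_free,
                           cam_height=1.6, margin=0.2, max_tries=1000):
    """Rejection-sample a camera position inside the room's bounding box that is
    not occupied by furniture. `is_free((x, y, z)) -> bool` is assumed to be an
    occupancy/collision query provided by the scene; the fixed camera height and
    wall margin are illustrative choices."""
    for _ in range(max_tries):
        x = random.uniform(room_min[0] + margin, room_max[0] - margin)
        y = random.uniform(room_min[1] + margin, room_max[1] - margin)
        p = (x, y, cam_height)
        if is_free(p):
            return p
    raise RuntimeError("no free camera position found in this room")

# Example with a trivial occupancy test (everything free):
print(sample_camera_position((0, 0, 0), (4, 3, 2.8), is_free=lambda p: True))
```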
We would like to emphasize the potential of our dataset in terms of extension capabilities. As we mentioned before, the unified representation enables us to include many other types of structure in the dataset. As for 2D rendering, depending on the application, we can easily simulate different effects such as lighting conditions, fisheye and novel camera designs, motion blur, and imaging noise. Furthermore, the dataset may be extended to include videos for applications like floorplan reconstruction [13] and visual SLAM [4]. Experiments To demonstrate the benefits of our new dataset, we use it to train deep neural networks for room layout estimation, an important task in structured 3D modeling. Experiment Setup Real dataset. We use the same dataset as LayoutNet [34]. The dataset consists of images from PanoContext [31] and 2D-3D-S [2], including 818 training images, 79 validation images, and 166 test images. Note that both datasets only provide cuboid-shape layout annotations. Our Structured3D dataset. In this experiment, we use a subset of panoramas with the original lighting and configuration. Each panorama corresponds to a different room in our dataset. We show statistics of different room layouts in 1 Right: real-world decoration. Table 2. Since current real dataset only contains cuboid-shape layout annotations (i.e., 4 corners), we choose 12k panoramic images with the cuboid-shape layout in our dataset. We split the images into 10k for training, 1k for validation, and 1k for testing. Evaluation metrics. Following [34,21], we adopt three standard metrics: i) 3D IoU: intersection over union between predicted 3D layout and the ground truth, ii) Corner Error (CE): Normalized 2 distance between predicted corner and ground truth, and iii) Pixel Error (PE): pixel-wise error between predicted plane classes and ground truth. Baselines. We choose two recent CNN-based approaches, LayoutNet [34] 2 and HorizonNet [21] 3 , based on their performance and source code availability. LayoutNet uses a CNN to predict a corner probability map and a boundary map from the panorama and vanishing lines, then optimizes the layout parameters based on network predictions. Hori-zonNet represents room layout as three 1D vectors, i.e., boundary positions of floor-wall, and ceiling wall, and existence of wall-wall boundary. It trains CNNs to directly predict the three 1D vectors. In this paper, we follow the default training setting of the respective methods and stop the training once the loss converges on the validation set. Experiment Results We have conduct several sets of experiments to measure the usefulness of our synthetic dataset. Impact of synthetic data. In this experiment, we train Lay-outNet and HorizonNet in three different manners: i) training only on our synthetic dataset ("s"), ii) training only on the real dataset ("r"), and iii) pre-training on our synthetic dataset, then fine-tuning on the real dataset ("s → r"). We adopt the training set of LayoutNet as the real dataset in this experiment. The results are shown in Table 3, in which we also include the numbers reported in the original papers ("r * "). As one can see, the use of synthetic data for pretraining can boost the performance of both networks. We refer readers to supplementary materials for more qualitative results. Performance vs. synthetic data size. We further study the relationship between the number of synthetic images used in pre-training and the accuracy on the real dataset. 
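The "s → r" regime described above (pre-train on synthetic panoramas, then fine-tune on the real set, typically with a smaller learning rate) can be summarized by the schematic loop below. The tiny model, the random tensors standing in for panoramas and corner targets, the epoch counts, and the learning rates are all placeholders, not the actual LayoutNet or HorizonNet training code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins: a tiny regressor mapping a low-res panorama to 4 corner (x, y)
# coordinates, plus random tensors in place of the synthetic and real datasets.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 128, 8))
synthetic = TensorDataset(torch.randn(64, 3, 64, 128), torch.randn(64, 8))
real = TensorDataset(torch.randn(16, 3, 64, 128), torch.randn(16, 8))

def run_stage(dataset, epochs, lr):
    """One training stage: plain supervised regression with an L1 loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for x, y in DataLoader(dataset, batch_size=8, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

run_stage(synthetic, epochs=5, lr=1e-4)  # "s": pre-train on synthetic panoramas
run_stage(real, epochs=5, lr=1e-5)       # "-> r": fine-tune on the real dataset
```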
We sample 2.5k, 5k and 10k synthetic images for pre-training, then fine-tune the model on the real dataset. The results are shown in Table 4. As expected, using more synthetic data generally improves the performance. Table 4: Quantitative evaluation using varying synthetic data size in pre-training. The best and the second best results are boldfaced and underlined, respectively. Generalization to different domains. To compare the generalizability of the models trained on the synthetic dataset and the real dataset, we conduct experiments in two different configurations: i) training on our synthetic data, and ii) training on one real dataset. Then we test both models on the other real dataset. Note that the data used in LayoutNet is from two domains, i.e. PanoContext (PC) and 2D-3D-S. In this experiment, we use the two datasets separately. As shown in Table 5, when tested on PanoContext, the model trained on our data significantly outperforms the one trained on 2D-3D-S. When tested on 2D-3D-S, the model trained on our data is competitive with or slightly better than the one trained on PanoContext. Note that our dataset and PanoContext both focus on residential scenes, whereas images in 2D-3D-S are taken from office areas. Limitation of real datasets. Due to human errors, the annotation in real datasets is not always consistent with the actual room layout. In the left image of Figure 7, the room is a non-cuboid shape layout, but the ground truth layout is labeled as cuboid-shape. In the right image, the front wall is not labeled as ground truth. These examples illustrate the limitation of using real datasets as benchmarks. We avoid such errors in our dataset by automatically generating ground truth from the original design files. Methods Synthetic PanoContext 2D-3D-S Data Size 3D IoU (%) ↑ CE (%) ↓ PE (%) ↓ 3D IoU (%) ↑ CE (%) ↓ PE (%) ↓ Conclusion In this paper, we present Structured3D, a large synthetic dataset with rich ground truth 3D structure annotations and photo-realistic 2D renderings. We view this work as an important and exciting step towards building intelligent machines which can achieve human-level holistic 3D scene understanding: The unified "primitive+relationship" representation enables us to efficiently capture a wide variety of 3D structures and their relations, whereas the availability of millions of professional interior designs makes it possible to generate virtually unlimited amount of photo-realistic images and videos. In the future, we will continue to add more 3D structure annotations of the scenes and objects to the dataset, and explore novel ways to use the dataset to advance techniques for structured 3D modeling and understanding.
3,766
1908.00222
2965084509
Recently, there has been growing interest in developing learning-based methods to detect and utilize salient semi-global or global structures, such as junctions, lines, planes, cuboids, smooth surfaces, and all types of symmetries, for 3D scene modeling and understanding. However, the ground truth annotations are often obtained via human labor, which is particularly challenging and inefficient for such tasks due to the large number of 3D structure instances (e.g., line segments) and other factors such as viewpoints and occlusions. In this paper, we present a new synthetic dataset, Structured3D, with the aim of providing large-scale photo-realistic images with rich 3D structure annotations for a wide spectrum of structured 3D modeling tasks. We take advantage of the availability of millions of professional interior designs and automatically extract 3D structures from them. We generate high-quality images with an industry-leading rendering engine. We use our synthetic dataset in combination with real images to train deep neural networks for room layout estimation and demonstrate improved performance on benchmark datasets.
Room layout estimation. Room layout estimation aims to reconstruct the enclosing structure of the indoor scene, consisting of walls, floor, and ceiling. Existing public datasets ( , PanoContext @cite_3 and LayoutNet @cite_0 ) assume a simple cuboid-shape layout. PanoContext @cite_3 collects about 500 panoramas from the SUN360 dataset @cite_11 , LayoutNet @cite_0 extends the layout annotations to include panoramas from 2D-3D-S @cite_22 . Recently, Realtor360 @cite_32 collects 2,500 indoor panoramas from SUN360 @cite_11 and a real-estate database, and provides annotation of a more general Manhattan layout. We note that all room layout in these real datasets is manually labeled by the human. Since the room structure may be occluded by furniture and other objects, the ground truth'' inferred by humans may be not consistent with the actual layout. In our dataset, all ground truth 3D annotations are automatically extracted from the original house design files.
{ "abstract": [ "We present a dataset of large-scale indoor spaces that provides a variety of mutually registered modalities from 2D, 2.5D and 3D domains, with instance-level semantic and geometric annotations. The dataset covers over 6,000m2 and contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, global XYZ images (all in forms of both regular and 360° equirectangular images) as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. The dataset enables development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces. The dataset is available here: this http URL", "", "The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360° full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image region category classifier, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.", "We propose an algorithm to predict room layout from a single image that generalizes across panoramas and perspective images, cuboid layouts and more general layouts (e.g. \"L\"-shape room). Our method operates directly on the panoramic image, rather than decomposing into perspective images as do recent works. Our network architecture is similar to that of RoomNet [15], but we show improvements due to aligning the image based on vanishing points, predicting multiple layout elements (corners, boundaries, size and translation), and fitting a constrained Manhattan layout to the resulting predictions. Our method compares well in speed and accuracy to other existing work on panoramas, achieves among the best accuracy for perspective images, and can handle both cuboid-shaped and more general Manhattan layouts.", "We introduce the problem of scene viewpoint recognition, the goal of which is to classify the type of place shown in a photo, and also recognize the observer's viewpoint within that category of place. We construct a database of 360° panoramic images organized into 26 place categories. For each category, our algorithm automatically aligns the panoramas to build a full-view representation of the surrounding place. We also study the symmetry properties and canonical viewpoint of each place category. At test time, given a photo of a scene, the model can recognize the place category, produce a compass-like indication of the observer's most likely viewpoint within that place, and use this information to extrapolate beyond the available view, filling in the probable visual layout that would appear beyond the boundary of the photo." 
], "cite_N": [ "@cite_22", "@cite_32", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "2586114507", "2902957614", "566730006", "2962717701", "2160398734" ] }
Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling
Inferring 3D information from 2D sensory data such as images and videos has long been a central research topic in computer vision. Conventional approach to build 3D models of a scene typically relies on detecting, matching, and triangulating local image features (e.g., patches, superpixels, edges, and SIFT features). Although significant progress has been made over the past decades, these methods still suffer from some fundamental problems. In particular, local feature detection is sensitive to a large number of factors such as scene appearance (e.g., textureless areas and repetitive patterns), lighting conditions, and occlusions. Further, the noisy, point cloud-based 3D model often fails to meet the increasing demand for high-level 3D understanding in real-world applications. When perceiving 3D scenes, humans are remarkably effective in using salient global structures such as lines, contours, planes, smooth surfaces, symmetries, and repetitive patterns. Thus, if a reconstruction algorithm can take advantage of such global information, it is natural to expect the algorithm to obtain more accurate results. Traditionally, however, it has been computationally challenging to reliably detect such global structures from noisy local image features. Recently, deep learning-based methods have shown promising results in detecting various forms of structure directly from the images, including lines [9], planes [15,28,12,30], cuboids [7], floorplans [14,13], room layouts [10,34,21], Table 1: An overview of structured 3D scene datasets. † : The actual numbers are not explicitly given and hard to estimate, because these datasets contain images downloaded from Internet (LSUN Room Layout, PanoContext), or from multiple sources (LayoutNet, Realtor360). * : Dataset is unavailable online at the time of submission. Datasets #Scenes #Rooms #Frames Annotated structure PlaneRCNN [12] --100,000 planes Wireframe [9] --5,462 wireframe (2D) SUN Primitive [27] --785 cuboids, other primitives LSUN Room Layout [33] n/a † 5,396 cuboid layout PanoContext [31] n/a † 500 (pano) cuboid layout LayoutNet [34] n/a † 1,071 (pano) cuboid layout Realtor360 * [29] n/a † 2,573 (pano) Manhattan layout Raster-to-Vector [14] 870 --floorplan Structured3D 3,500 21,835 196,515 "primitive + relationship" abstracted 3D shapes [22,25], and smooth surfaces [8]. With the fast development of deep learning methods comes the need for large amounts of accurately annotated data. In order to train the proposed neural networks, most prior work collects their own sets of images and manually label the structure of interest in them. Such a strategy has several shortcomings. First, due to the tedious process of manually labelling and verifying all the structure instances (e.g., line segments) in each image, existing datasets typically have limited sizes and scene diversity. And the annotations may also contain errors. Second, since each study primarily focuses on one type of structure, none of these datasets has multiple types of structure labeled. As a result, existing methods are unable to exploit relations between different types of structure (e.g., lines and planes) as humans do for effective, efficient, and robust 3D reconstruction. In this paper, we present a large synthetic dataset with rich annotations of 3D structure and photo-realistic 2D renderings of indoor man-made environments (Figure 1). 
At the core of our dataset design is a unified representation of 3D structure which enables us to efficiently capture multiple types of 3D structure in the scene. Specifically, the proposed representation considers any structure as relationship among geometric primitives. For example, a "wireframe" structure encodes the incidence and intersection relationship between line segments, whereas a "cuboid" structure encodes the rotational and reflective symmetry relationship among its planar faces. With our "primitive + relationship" representation, one can easily derive the ground truth annotations for a wide variety of semi-global and global structures (e.g., lines, wireframes, planes, regular shapes, floorplans, and room layouts), and also exploit their relations in future data-driven approaches (e.g., the wireframe formed by intersecting planar surfaces in the scene). To create a large-scale dataset with the aim to facilitate research on data-driven methods for structured 3D scene understanding, we leverage the availability of millions of professional interior designs and millions of production-level 3D object models -all coming with fine geometric details and high-resolution texture (Figure 1(a)). We first use computer programs to automatically extract information about 3D structure from the original house design files. As shown in Figure 1(b), our dataset contains rich annotations of 3D room structure including a variety of geometric primitives and relationships. To further generate photo-realistic 2D images (Figure 1(c)), we utilize industry-leading rendering engines to model the lighting conditions. Currently, our dataset consists of more than 196k images of 21,835 rooms in 3,500 scenes (i.e., houses). To showcase the usefulness and uniqueness of the proposed Structured3D dataset, we train deep networks for room layout estimation on a subset of the dataset. We show that the models first trained on our synthetic data and then fine-tuned on real data outperform the models trained on real data only. We also show good generalizability of the models trained on our synthetic data by directly applying them to real world images. In summary, the main contributions of this paper are: • We introduce a unified "primitive + relationship" representation for 3D structure. This representation enables us to efficiently capture a wide variety of semiglobal and global 3D structures, as well as their mutual relationships. • We create the Structured3D dataset, which contains rich ground truth 3D structure annotations of 21,835 rooms in 3,500 scenes, and more than 196k photorealistic 2D renderings of the rooms. • We verify the usefulness of our dataset by using it to train deep networks for room layout estimation and demonstrating improved performance on benchmark datasets. A Unified Representation of 3D Structure The main goal of our dataset is to provide rich annotations of ground truth 3D structure. A naive way to do so is generating and storing different types of 3D annotations in the same format as existing works, like wireframes as in [9], planes as in [12], floorplans as in [14], and so on. But this leads to a lot of redundancy. For example, planar surfaces in man-made environments are often bounded by a number of line segments, which are part of the wireframe. Even worse, by representing wireframes and planes separately, the relationships between them is also lost. 
In this paper, we present a unified representation of 3D structure in man-made environments, in order to minimize the redundancy in encoding multiple types of 3D structure, while preserving their mutual relationships. We show how most common types of structure previous studied in the literature (e.g., planes, cuboids, wireframes, room layouts, and floorplans) can be derived from our representation. We highlight a "room", a "balcony", and the "door" connecting them. Our representation of structure is largely inspired by the early work of Witkin and Tenenbaum [24], which characterizes structure as "a shape, pattern, or configuration that replicates or continues with little or no change over an interval of space and time". Accordingly, to describe any structure, we need to specify: (i) what pattern is continuing or replicating (e.g., a patch, an edge, or a texture descriptor), and (ii) the domain of its replication or continuation. In this paper, we call the former primitives and the latter relationships. The "Primitive + Relationship" Representation We now show how to describe a man-made environment using the "primitive + relationship" representation. For ease of exposition, we assume all objects in the scene can be modeled by piece-wise planar surfaces. But our representation can be easily extended to more general surfaces. An illustration of our representation is shown in Figure 3. Primitives Generally, a man-made environment consists of the following geometric primitives: • Planes P: We model the scene as a collection of planar surfaces P = {p 1 , p 2 , . . .} where each plane is described by its parameters p = {n, d}. • Lines L: When two planes intersect in the 3D space, a line is created. We use L = {l 1 , l 2 , . . .} to represent the set of all 3D lines in the scene. • Junction points X: When two lines meet in the 3D space, a junction point is formed. We use X = {x 1 , x 2 , . . .} to represent the set of all junction points. Relationships Next, we define some common types of relationships between the geometric primitives: • Plane-line relationships (R 1 ): We use a matrix W 1 to record all incidence and intersection relationships between planes in P and lines in L. Specifically, the ij-th entry of W 1 is 1 if l i is on p j , and 0 otherwise. Note that two planes are intersected at some line if and only if the corresponding entry in W T 1 W 1 is nonzero. • Line-point relationships (R 2 ): Similarly, we use a matrix W 2 to record all incidence and intersection relationships between lines in L and points in X. Specifically, the mn-th entry of W 2 is 1 if x m is on l n , and 0 otherwise. Note that two lines are intersected at some junction if and only if the corresponding entry in W T 2 W 2 is nonzero. • Cuboids (R 3 ): A cuboid is a special arrangement of plane primitives with rotational and reflection symmetry along x-, y-and z-axes. The corresponding symmetry group is the dihedral group D 2h . • Manhattan world (R 4 ): This is a special type of 3D structure commonly used for indoor and outdoor scene modeling. It can be viewed as a grouping relationship, in which all the plane primitives can be grouped into three classes, P 1 , P 2 , and P 3 , P = • Semantic objects (R 5 ): Semantic information is critical for many 3D computer vision tasks. It can be regarded as another type of grouping relationship, in which each semantic object instance corresponds to one or more primitives defined above. 
For example, each "wall", "ceiling", or "floor" instance is associated with one plane primitive; each "chair" instance is associated with a set of multiple plane primitives. Further, such a grouping is hierarchical. For example, we can further group one floor, one ceiling, and multiple walls to form a "living room" instance. And a "door" or a "window" is an opening which connects two rooms (or one room and the outer space). Note that the relationships are not mutually exclusive, in the sense that a primitive can belong to multiple relationship instances of same type or different types. For example, a plane primitive can be shared by two cuboids, and at the same time belong to one of the three classes in the Manhattan world model. Discussion The primitives and relationships we discussed above are just a few most common examples. They are by no means exhaustive. For example, our representation can be easily extended to included other primitives such as parametric surfaces. And besides cuboids, there are many other types of regular or symmetric shapes in man-made environments, where type corresponds to a different symmetry group. Relation to Existing Models Given our representation which contains primitives P = {P, L, X} and relationships R = {R 1 , R 2 , . . .}, we show how several types of 3D structure commonly studied in the literature can be derived from it. We again refer readers to Figure 2 for illustrations of these structures. Planes: A large volume of studies in the literature model the scene as a collection of 3D planes, where each plane is represented by its parameters and boundary. To generate such a model, we simply use the plane primitives P. For each p ∈ P, we further obtain its boundary by using matrix W 1 in R 1 to find all the lines in L that form an incidence relationship with p. Wireframes: A wireframe consists of lines L and junction points P, and their incidence and intersection relationships (R 2 ). Cuboids: This model is same as R 3 . Manhattan layouts: A Manhattan room layout model includes a "room" as defined in R 5 which also satisfies the Manhattan world assumption (R 4 ). Floorplans: A floorplan is a 2D vector representation which consists of a set of line segments and semantic labels (e.g., room types). To obtain such a vector representation, we can identify all lines in L and junction points in X which lie on a "floor" (as defined in R 5 ). To further obtain the semantic room labels, we can project all "rooms", "doors", and "windows" (as defined in R 5 ) to this floor. Abstracted 3D shapes: In addition to room structures, our representation can also be applied to individual 3D object models to create abstractions in the form of wireframes or cuboids, as described above. The Structured3D Dataset Our general, unified representation enables us to encodes a rich set of geometric primitives and relationships for structured 3D modeling. With this representation, our ultimate goal is to build a dataset which can be used to train machines to achieve the human-level understanding of the 3D environment. As a first step towards this goal, in this section, we describe our on-going effort to create a large-scale dataset of indoor scenes which include (i) ground truth 3D structure annotations of the scene and (ii) realistic 2D renderings of the scene. Note that in this work we focus on extracting ground truth annotations on the room structure only. We plan to extend our dataset to include 3D structure annotations of individual furniture models in the future. 
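To illustrate the floorplan derivation described above, the sketch below keeps only the line segments lying on the floor and drops their height coordinate to obtain 2D floorplan segments; the assumption that the floor is the plane z = 0 (and the segment format) is made up for the example.

```python
import numpy as np

def floorplan_segments(segments_3d, floor_z=0.0, eps=1e-3):
    """Keep 3D line segments whose endpoints lie on the floor plane (assumed to
    be z = floor_z) and drop the z coordinate to get 2D floorplan segments.
    Each segment is ((x0, y0, z0), (x1, y1, z1))."""
    plan = []
    for p0, p1 in segments_3d:
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        if abs(p0[2] - floor_z) < eps and abs(p1[2] - floor_z) < eps:
            plan.append((tuple(p0[:2]), tuple(p1[:2])))
    return plan

# Two floor edges of a wall footprint plus one vertical wall edge (dropped).
walls = [((0, 0, 0), (4, 0, 0)), ((4, 0, 0), (4, 3, 0)), ((0, 0, 0), (0, 0, 2.8))]
print(floorplan_segments(walls))   # -> [((0.0, 0.0), (4.0, 0.0)), ((4.0, 0.0), (4.0, 3.0))]
```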
Extraction of Structured 3D Models To extract a "primitive + relationship" representation of the 3D scene, we make use of a large database of over one million house designs hand-crafted by professional designers. An example design is shown in Figure 4(a). All information of the design is stored in an industry-standard format in the database so that specifications about the geometry (e.g., the precise length, width, and height of each wall), textures and materials, and functions (e.g., which room the wall belongs to) of all objects can be easily retrieved. From the database, we have selected 3,500 house designs with about 21,854 rooms. We created a computer program to automatically extract all the geometric primitives associated with the room structure, which consists of the ceiling, The 3D models in SUNCG dataset [20] are created using Planner 5D [1], an online tool for amateur interior design. floor, walls, and openings (doors and windows). Given the precise measurements and associated information of these entities in the database, it is straightforward to generate all planes, lines, and junctions, as well as their relationships (R 1 and R 2 ). Since the measurements are highly accurate and noisefree, other types of relationship such a Manhattan world (R 3 ) and cuboids (R 4 ) can also be easily obtained by clustering the primitives, followed by a geometric verification process. Finally, to include semantic information (R 5 ) into our representation, we simply map the relevant labels provided by the professional designers to the geometric primitives in our representation. Figure 3 shows examples of the extracted geometric primitives and relationships. Photo-realistic 2D Rendering We have developed a photo-realistic renderer on top of Embree [23], an open-source collection of ray-tracing kernels for x86 CPUs. Our renderer uses the well-known path tracing [17] method, a Monte Carlo approach to approximating realistic Global Illumination (GI) for rendering. Each room is manually created by professional designers with over one million CAD models of furniture from world-leading manufacturers. These high-resolution furniture models are measured in real-world dimensions and being used in real production. A default lighting setup is also provided for each room. Figure 4 compares the 3D models in our database with those in the public SUNCG dataset [20], which are created using Planner 5D [1], an online tool for amateur interior design. At the time of rendering, a panorama or pin-hole camera is placed at random locations not occupied by objects in the room. We use 1024 × 512 resolution for panoramas and 640 × 480 for perspective images. Figure 5 shows example panoramas rendered by our engine. For each room, we generate a few different configurations (full, simple, and empty) by removing some or all the furniture. We also modify the lighting setup to generate images with different tem- Figure 6: Photo-realistic rendering vs. real-world decoration. We encourage readers to guess which column corresponds to real-world decoration. The answer is in the footnote 1 . perature. Besides images, our dataset also includes the corresponding depth maps and semantic labels, as they may be useful either as inputs to machine learning algorithms or to help multi-task learning. Figure 6 further illustrates the degree of photo-realism of our dataset, where we compare the rendered images with photos of real decoration guided by the design. 
We would like to emphasize the potential of our dataset in terms of extension capabilities. As we mentioned before, the unified representation enables us to include many other types of structure in the dataset. As for 2D rendering, depending on the application, we can easily simulate different effects such as lighting conditions, fisheye and novel camera designs, motion blur, and imaging noise. Furthermore, the dataset may be extended to include videos for applications like floorplan reconstruction [13] and visual SLAM [4]. Experiments To demonstrate the benefits of our new dataset, we use it to train deep neural networks for room layout estimation, an important task in structured 3D modeling. Experiment Setup Real dataset. We use the same dataset as LayoutNet [34]. The dataset consists of images from PanoContext [31] and 2D-3D-S [2], including 818 training images, 79 validation images, and 166 test images. Note that both datasets only provide cuboid-shape layout annotations. Our Structured3D dataset. In this experiment, we use a subset of panoramas with the original lighting and configuration. Each panorama corresponds to a different room in our dataset. We show statistics of different room layouts in 1 Right: real-world decoration. Table 2. Since current real dataset only contains cuboid-shape layout annotations (i.e., 4 corners), we choose 12k panoramic images with the cuboid-shape layout in our dataset. We split the images into 10k for training, 1k for validation, and 1k for testing. Evaluation metrics. Following [34,21], we adopt three standard metrics: i) 3D IoU: intersection over union between predicted 3D layout and the ground truth, ii) Corner Error (CE): Normalized 2 distance between predicted corner and ground truth, and iii) Pixel Error (PE): pixel-wise error between predicted plane classes and ground truth. Baselines. We choose two recent CNN-based approaches, LayoutNet [34] 2 and HorizonNet [21] 3 , based on their performance and source code availability. LayoutNet uses a CNN to predict a corner probability map and a boundary map from the panorama and vanishing lines, then optimizes the layout parameters based on network predictions. Hori-zonNet represents room layout as three 1D vectors, i.e., boundary positions of floor-wall, and ceiling wall, and existence of wall-wall boundary. It trains CNNs to directly predict the three 1D vectors. In this paper, we follow the default training setting of the respective methods and stop the training once the loss converges on the validation set. Experiment Results We have conduct several sets of experiments to measure the usefulness of our synthetic dataset. Impact of synthetic data. In this experiment, we train Lay-outNet and HorizonNet in three different manners: i) training only on our synthetic dataset ("s"), ii) training only on the real dataset ("r"), and iii) pre-training on our synthetic dataset, then fine-tuning on the real dataset ("s → r"). We adopt the training set of LayoutNet as the real dataset in this experiment. The results are shown in Table 3, in which we also include the numbers reported in the original papers ("r * "). As one can see, the use of synthetic data for pretraining can boost the performance of both networks. We refer readers to supplementary materials for more qualitative results. Performance vs. synthetic data size. We further study the relationship between the number of synthetic images used in pre-training and the accuracy on the real dataset. 
We sample 2.5k, 5k, and 10k synthetic images for pre-training, then fine-tune the model on the real dataset. The results are shown in Table 4. As expected, using more synthetic data generally improves the performance. Table 4: Quantitative evaluation using varying synthetic data sizes in pre-training. The best and the second best results are boldfaced and underlined, respectively. Generalization to different domains. To compare the generalizability of the models trained on the synthetic dataset and the real dataset, we conduct experiments in two different configurations: i) training on our synthetic data, and ii) training on one real dataset. Then we test both models on the other real dataset. Note that the data used in LayoutNet is from two domains, i.e., PanoContext (PC) and 2D-3D-S. In this experiment, we use the two datasets separately. As shown in Table 5, when tested on PanoContext, the model trained on our data significantly outperforms the one trained on 2D-3D-S. When tested on 2D-3D-S, the model trained on our data is competitive with or slightly better than the one trained on PanoContext. Note that our dataset and PanoContext both focus on residential scenes, whereas images in 2D-3D-S are taken from office areas. Limitation of real datasets. Due to human errors, the annotations in real datasets are not always consistent with the actual room layout. In the left image of Figure 7, the room has a non-cuboid layout, but the ground truth is labeled as cuboid-shaped. In the right image, the front wall is not labeled in the ground truth. These examples illustrate the limitation of using real datasets as benchmarks. We avoid such errors in our dataset by automatically generating the ground truth from the original design files. (Table header: Methods; Synthetic Data Size; 3D IoU (%) ↑, CE (%) ↓, PE (%) ↓ on PanoContext and 2D-3D-S.) Conclusion In this paper, we present Structured3D, a large synthetic dataset with rich ground truth 3D structure annotations and photo-realistic 2D renderings. We view this work as an important and exciting step towards building intelligent machines that can achieve human-level holistic 3D scene understanding: the unified "primitive + relationship" representation enables us to efficiently capture a wide variety of 3D structures and their relations, whereas the availability of millions of professional interior designs makes it possible to generate a virtually unlimited amount of photo-realistic images and videos. In the future, we will continue to add more 3D structure annotations of the scenes and objects to the dataset, and explore novel ways to use the dataset to advance techniques for structured 3D modeling and understanding.
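As a concrete reference for the layout evaluation metrics used above, here is a hedged sketch of the corner error and pixel error computations under simplifying assumptions: corners are already matched one-to-one and given in image coordinates, the corner error is normalized by the image diagonal (one common convention), and layouts are dense per-pixel plane-class maps of equal size.

```python
# Hedged sketch of two layout metrics; matching, normalization, and shapes are assumptions.
import numpy as np

def corner_error(pred_corners, gt_corners, img_w, img_h):
    """Mean L2 distance between matched predicted and ground-truth corners,
    normalized by the image diagonal and reported in percent."""
    pred = np.asarray(pred_corners, dtype=float)
    gt = np.asarray(gt_corners, dtype=float)
    diag = np.sqrt(img_w ** 2 + img_h ** 2)
    return 100.0 * np.mean(np.linalg.norm(pred - gt, axis=1)) / diag

def pixel_error(pred_labels, gt_labels):
    """Percentage of pixels whose predicted plane class differs from the ground truth."""
    pred = np.asarray(pred_labels)
    gt = np.asarray(gt_labels)
    return 100.0 * np.mean(pred != gt)

# Example on a 1024 x 512 panorama with four cuboid corners (dummy values):
ce = corner_error([[100, 200], [300, 220], [520, 210], [800, 190]],
                  [[102, 198], [298, 224], [523, 208], [805, 193]],
                  img_w=1024, img_h=512)
pe = pixel_error(np.zeros((512, 1024), dtype=int), np.zeros((512, 1024), dtype=int))
print(f"CE = {ce:.3f}%, PE = {pe:.3f}%")
```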
3,766
1907.13594
2965144741
Motivated by ceiling inspection applications via unmanned aerial vehicles (UAVs), which require close-proximity flight to surfaces, a systematic control approach enabling safe and accurate close-proximity flight is proposed in this work. There are two main challenges for close-proximity flight: (i) the thrust characteristics vary drastically with the distance from the ceiling, which results in complex nonlinear dynamics; (ii) the system needs to respect physical and environmental constraints to fly safely in close proximity. To address these challenges, a novel framework consisting of a constrained optimization-based force estimation and an optimization-based nonlinear controller is proposed. Experimental results illustrate that the proposed control approach can stabilize the UAV down to a 1 cm distance from the ceiling. Furthermore, we report that the UAV consumes up to 12.5% less power when operated at a 1 cm distance from the ceiling, which shows promising potential for more battery-efficient inspection flights.
The available approaches can handle the control of the flying robot when it is not engaged in an interaction. However, the challenges associated with the aerodynamic interaction require the system to be more responsive, adaptive, and resilient @cite_12 @cite_29 @cite_31 @cite_21. This operation also brings system- and environment-based constraints, including the level of the interaction. The available approaches that consider the constraints leverage individual multi-models for generic interaction problems, which brings additional complexity @cite_5. Moreover, nominal optimization-based approaches have been considered for UAV control in interaction tasks, wherein the system lacks the ability to take external forces, changing parameters, and unmodeled dynamics into account @cite_17 @cite_19 @cite_25.
{ "abstract": [ "This paper presents a novel control algorithm to regulate the aerodynamic thrust produced by fixed-pitch rotors commonly used on small-scale electrically powered multirotor aerial vehicles. The proposed controller significantly improves the disturbance rejection and gust tolerance of rotor thrust control compared to state-of-the-art RPM (revolutions per minute) rotor control schemes. The thrust modeling approach taken is based on a model of aerodynamic power generated by a fixed-pitch rotor and computed in real time on the embedded electronic speed controllers using measurements of electrical power and rotor angular velocity. Static and dynamic flight tests were carried out in downdrafts and updrafts of varying strengths to quantify the resulting improvement in maintaining a desired thrust setpoint. The performance of the proposed approach in flight conditions is demonstrated by a path tracking experiment, where a quadrotor was flown through an artificial wind gust and the trajectory tracking error was measured. The proposed approach for thrust control demonstrably reduced the tracking error compared to classical RPM rotor control.", "This paper proposes the use of a novel control method based on IDA-PBC in order to address the Aerial Physical Interaction (APhI) problem for a quadrotor UAV. The apparent physical properties of the quadrotor are reshaped in order to achieve better APhI performances, while ensuring the stability of the interaction through passivity preservation. The robustness of the IDA-PBC method with respect to sensor noise is also analyzed. The direct measurement of the external wrench-needed to implement the control method-is compared to the use of a nonlinear Lyapunov-based wrench observer and advantages disadvantages of both methods are discussed. The validity and practicability of the proposed APhI method is evaluated through experiments, where for the first time in the literature, a lightweight all-in-one low-cost F T sensor is used onboard of a quadrotor. Two main scenarios are shown: a quadrotor responding external disturbances while hovering (physical human-quadrotor interaction), and the same quadrotor sliding with a rigid tool along an uneven ceiling surface (inspection painting-like task).", "This paper presents a nonlinear model predictive controller to follow desired 3D trajectories with the end effector of an unmanned aerial manipulator (i.e., a multirotor with a serial arm attached). To the knowledge of the authors, this is the first time that such controller runs online and on board a limited computational unit to drive a kinematically augmented aerial vehicle. Besides the trajectory following target, we explore the possibility of accomplishing other tasks during flight by taking advantage of the system redundancy. We define several tasks designed for aerial manipulators and show in simulation case studies how they can be achieved by either a weighting strategy, within a main optimization process, or a hierarchical approach consisting on nested optimizations. Moreover, experiments are presented to demonstrate the performance of such controller in a real robot.", "The challenge of aerial robotic contact-based inspection is the driving motivation of this paper. The problem is approached on both levels of control and path-planning by introducing algorithms and control laws that ensure optimal inspection through contact and controlled aerial robotic physical interaction. 
Regarding the flight and physical interaction stabilization, a hybrid model predictive control framework is proposed, based on which a typical quadrotor becomes capable of stable and active interaction, accurate trajectory tracking on environmental surfaces as well as force control. Convex optimization techniques enabled the explicit computation of such a controller which accounts for the dynamics in free-flight as well as during physical interaction, ensures the global stability of the hybrid system and provides optimal responses while respecting the physical limitations of the vehicle. Further augmentation of this scheme, allowed the incorporation of a last-resort obstacle avoidance mechanism at the control level. Relying on such a control law, a contact-based inspection planner was developed which computes the optimal route within a given set of inspection points while avoiding any obstacles or other no-fly zones on the environmental surface. Extensive experimental studies that included complex \"aerial-writing\" tasks, interaction with non-planar and textured surfaces, execution of multiple inspection operations and obstacle avoidance maneuvers, indicate the efficiency of the proposed methods and the potential capabilities of aerial robotic inspection through contact.", "In this paper, we consider the problem of multirotor flying robots physically interacting with the environment under wind influence. The result are the first algorithms for simultaneous online estimation of contact and aerodynamic wrenches acting on the robot based on real-world data, without the need for dedicated sensors. For this purpose, we investigate two model-based techniques for discriminating between aerodynamic and interaction forces. The first technique is based on aerodynamic and contact torque models, and uses the external force to estimate wind speed. Contacts are then detected based on the residual between estimated external torque and expected (modeled) aerodynamic torque. Upon detecting contact, wind speed is assumed to change very slowly. From the estimated interaction wrench, we are also able to determine the contact location. This is embedded into a particle filter framework to further improve contact location estimation. The second algorithm uses the propeller aerodynamic power and angular speed as measured by the speed controllers to obtain an estimate of the airspeed. An aerodynamics model is then used to determine the aerodynamic wrench. Both methods rely on accurate aerodynamics models. Therefore, we evaluate data-driven and physics based models as well as offline system identification for flying robots. For obtaining ground truth data we performed autonomous flights in a 3D wind tunnel. Using this data, aerodynamic model selection, parameter identification, and discrimination between aerodynamic and contact forces could be done. Finally, the developed methods could serve as useful estimators for interaction control schemes with simultaneous compensation of wind disturbances.", "This paper concentrates on design of a vision-based guidance command for aerial manipulation of a cylindrical object, using a stochastic model predictive approach. We first develop an image-based cylinder detection algorithm that utilizes a geometric characteristic of perspectively projected circles in 3D space. To enforce the object to be located inside sight of a camera, we formulate a visual servoing problem as a stochastic model predictive control (MPC) framework. 
By regarding x and y axes rotational velocities as stochastic variables, we guarantee the visibility of the camera considering underactuation of the system. We also provide experimental results that validate effectiveness of the proposed algorithm.", "In this work, we demonstrate that the position tracking performance of a quadrotor may be significantly improved for forward and vertical flight by incorporating simple lumped parameter models for induced drag and thrust, respectively, into the quadrotor dynamics and modifying the controller to compensate for these terms. We further show that the parameters for these models may be easily and accurately identified offline from forward and vertical flight data. We demonstrate that the simple drag compensating controller can reduce the position error in the direction of forward flight in steady state by 75 , and that the controller using a more accurate thrust model, dubbed the “refined” thrust model, can improve the position error by 72 in the vertical direction.", "This paper considers pick-and-place tasks using aerial vehicles equipped with manipulators. The main focus is on the development and experimental validation of a nonlinear model-predictive control methodology to exploit the multi-body system dynamics and achieve optimized performance. At the core of the approach lies a sequential Newton method for unconstrained optimal control and a high-frequency low-level controller tracking the generated optimal reference trajectories. A low cost quadrotor prototype with a simple manipulator extending more than twice the radius of the vehicle is designed and integrated with an on-board vision system for object tracking. Experimental results show the effectiveness of model-predictive control to motivate the future use of real-time optimal control in place of standard ad-hoc gain scheduling techniques." ], "cite_N": [ "@cite_29", "@cite_21", "@cite_19", "@cite_5", "@cite_31", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2579730571", "2913393696", "2743408048", "2219720796", "2898704444", "2739355872", "2742196814", "1561858273" ] }
Aerial Robot Control in Close Proximity to Ceiling: A Force Estimation-based Nonlinear MPC
In recent years, in-situ UAV inspection has gained momentum as an application area in robotics research [1]. A typical inspection application requires a robot to achieve accurate motions in close proximity to the environment for long periods of measurement [2]. This is a challenging control task for conventional controllers such as PID, because of the strong cross-coupling between the UAV and its surroundings. Although nonlinear controllers for such operating conditions have been extensively studied for ground effects, to our knowledge, there is no controller that systematically uses force estimation in a predictive control framework to achieve accurate control in close proximity to the ground as well as the ceiling. In this study, the system is modeled as (i) a baseline model, which is composed of the second-order translational dynamics of the UAV system; and (ii) an additive model, which summarizes the interactions of the UAV with its surroundings. The additive model is constructed using lumped external forces. In our proposed approach, these external forces are estimated using nonlinear moving horizon estimation (NMHE) and fed into the nonlinear model predictive controller (NMPC) to fully capture the effect of the interaction on the system. Later, the proposed controller is tested in close proximity to the ceiling. In summary, the following novelties are proposed in this work: • For the first time, an optimization-based framework consisting of a force estimation-based nonlinear model predictive controller is investigated for the ceiling effect. • The power consumption of the aerial robot is analyzed for the proposed system in close proximity to the ceiling. Leveraging the above-listed key findings of this study, it is possible to prolong the flight duration as well as the battery life. Interested readers may refer to the work on the effect of reduced current on battery life for UAVs in [3]. III. PROBLEM FORMULATION A. Modeling A snapshot of the aerial robot in close proximity is presented in Fig. 1. In order to define the position of the aerial robot with respect to the world frame, the following transformation can be used: $^{W}\dot{\mathbf{x}} = \mathbf{R}^{W}_{B}\,{}^{B}\dot{\mathbf{x}}$, (1) where $^{W}\dot{\mathbf{x}}$ and $^{B}\dot{\mathbf{x}}$ are the translational states in the world frame and the body frame, respectively, and $\mathbf{R}^{W}_{B}$ is the rotation matrix from the body to the world frame. The second-order nonlinear translational dynamics of the aerial robot, including external forces, are expressed as $\mathbf{M}\left({}^{B}\ddot{\mathbf{x}} - \boldsymbol{\omega}\times{}^{B}\dot{\mathbf{x}}\right) + \mathbf{G} = \mathbf{F} + \mathbf{F}_{ext}$, (2) where $\mathbf{M} = m\cdot\mathbf{I}_{3}$ is the diagonal mass matrix and $\boldsymbol{\omega}$ is the vector of angular rates. $\mathbf{G}$ is the gravitational force acting on the system in the z direction [27]. The input vector is the force generated by the blades of the quadrotor in the body frame, $\mathbf{F} = [0, 0, F_{z}]^{T}$. Furthermore, $\mathbf{F}_{ext} \in \mathbb{R}^{3}$ is the external force vector acting on the system.
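To make the model concrete, here is a minimal numerical sketch of (1)-(2) as reconstructed above: a body-to-world rotation built from ZYX Euler angles and the body-frame translational acceleration with a lumped external force. The Euler convention, mass value, and function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the translational model (1)-(2); mass, Euler convention,
# and helper names are assumptions for illustration only.
import numpy as np

def rotation_wb(phi, theta, psi):
    """R^W_B from (1): body-to-world rotation for ZYX (yaw-pitch-roll) Euler angles."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    Rz = np.array([[cps, -sps, 0.0], [sps, cps, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cth, 0.0, sth], [0.0, 1.0, 0.0], [-sth, 0.0, cth]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cph, -sph], [0.0, sph, cph]])
    return Rz @ Ry @ Rx

def body_acceleration(v_body, omega, Fz, F_ext, m=1.5, g=9.81):
    """Solve (2) for the body-frame acceleration:
    M(a - omega x v) + G = F + F_ext  =>  a = (F + F_ext - G) / m + omega x v."""
    F = np.array([0.0, 0.0, Fz])          # thrust along the body z axis
    G = np.array([0.0, 0.0, m * g])       # lumped gravity term as in (2)
    return (F + np.asarray(F_ext) - G) / m + np.cross(omega, v_body)

# World-frame velocity via (1), e.g. while climbing slowly under a small suction force:
v_world = rotation_wb(0.0, 0.0, 0.0) @ np.array([0.0, 0.0, 0.1])
a_body = body_acceleration(np.array([0.0, 0.0, 0.1]), np.zeros(3), 14.7, [0.0, 0.0, 2.0])
```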
B. Augmented Formulation Consider a nonlinear system represented by the sum of a nominal and an additive model: $\dot{x}(t) = f_{n} + f_{a}$, (3a) $f_{n} = f\big(x(t), u(t)\big)$, (3b) $f_{a} = \mathbf{F}_{ext}$. (3c) In this representation, the external forces, $\mathbf{F}_{ext} = [F_{ext_x}, F_{ext_y}, F_{ext_z}]^{T}$, represent unmodeled dynamics, disturbances, changing parameters, as well as the external forces arising during the interaction phase. In this context, we can consider the following problem: Problem 1: In order to fly in close proximity, how can the system identify external forces precisely? If the system can accurately explore the external forces, how can this data be used within the controller? The state vector in the nominal case can be given in terms of the translational positions (x, y, z) and the velocities (u, v, w). However, to address the defined problem, we have augmented the nominal case as $x = [x, y, z, u, v, w, F_{ext_x}, F_{ext_y}, F_{ext_z}]^{T}$, (4) where $x_{k} \in \mathbb{X}$. The state constraint set $\mathbb{X}$ is closed, compact, and includes the origin. In the augmented model representation, the external force vector is assumed to be a constant disturbance in the form of $\dot{\mathbf{F}}_{ext} = 0$. It is also assumed that the origin is included in the feasible set. Therefore, the differentially flat states can be driven by the following input vector: $u = [F_{z}, \phi, \theta, \psi]^{T}$, (5) where $u_{k} \in \mathbb{U}$. The control constraint set $\mathbb{U}$ has the same properties as $\mathbb{X}$. The control vector includes the angular positions $(\phi, \theta, \psi)$. Therefore, the estimation problem can be formulated in discrete time as $x_{k+1} = f(x_{k}, u_{k}) + w_{k}$, (6a) $y_{k} = h(x_{k}, u_{k}) + \nu_{k}$, (6b) where the subscript k denotes the sample taken at time $t_{k}$, for all $k \geq 0$. The function $f(\cdot)$ is composed of discretized versions of (1) and (2). In order to obtain (6a), a direct multiple shooting method is utilized based on [28]. For this operation, a Gauss-Legendre integrator of order 4 is used with 2 steps per shooting interval, and the grid size is chosen as 10 ms. Moreover, the physical system parameters are adopted from [29]. The process noise is indicated by $w_{k} \in \mathbb{R}^{N_{x}\times 1}$, with covariance $E(ww^{T}) = Q_{x} \in \mathbb{R}^{N_{x}\times N_{x}}$. In order to identify the external forces online, the following measurement function is used: $h(\cdot) = [x, y, z, u, v, w, F_{z}, \phi, \theta, \psi]^{T}$. (7) The measurement noise is represented by $\nu_{k} \in \mathbb{R}^{N_{y}\times 1}$, with covariance $E(\nu\nu^{T}) = R_{x} \in \mathbb{R}^{N_{y}\times N_{y}}$. Assumption 1: The noise vectors ($w_{k}$ and $\nu_{k}$) are independent and normally distributed random variables. IV. FORCE ESTIMATION Consider a constrained state estimation problem in the form of a squared norm, using the data collected until the j-th time step: $$\min_{x_{k}, w_{k}} \sum_{k=0}^{j} \|\nu_{k}\|^{2}_{V} + \sum_{k=0}^{j-1} \|w_{k}\|^{2}_{W} + \|x_{0} - \bar{x}_{0}\|^{2}_{P_{L}} \quad (8a)$$ s.t. $x_{k+1} = f(x_{k}, u_{k}) + w_{k}, \ \forall k \in [0, \ldots, j-1]$, (8b) $y_{k} = h(x_{k}, u_{k}) + \nu_{k}, \ \forall k \in [0, \ldots, j]$, (8c) $x_{min} \leq x_{k} \leq x_{max}$, (8d) where $P_{L}$ is a positive definite weight matrix that balances the initial guess $\bar{x}_{0}$ against the initial state $x_{0}$. The other positive definite matrices are the inverses of the covariance matrices, where $V = Q_{x}^{-1/2}$ and $W = R_{x}^{-1/2}$. Unfortunately, in this generic formulation, the problem may become intractable as the data size increases over time. In order to avoid this curse of dimensionality, we can impose a moving window by limiting the number of last measurements. In this context, an estimation window of size N is considered, where L = j − N + 1.
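As a concrete illustration of the augmented model (3)-(6), the sketch below appends the lumped external force to the state, propagates it as a constant disturbance, and discretizes the dynamics with a generic RK4 step standing in for the Gauss-Legendre integrator used in the paper; the mass, time step, and simplified thrust rotation are assumptions.

```python
# Hedged sketch of the augmented discrete-time model (6a): the lumped external
# force is part of the state and propagated as a constant disturbance
# (F_ext_dot = 0). Mass, time step, and the simplified thrust rotation are
# assumptions; RK4 stands in for the paper's Gauss-Legendre integrator.
import numpy as np

M_KG, DT = 1.5, 0.01          # assumed mass and the 10 ms grid size

def f_cont(x, u):
    """Continuous augmented dynamics with x = [p(3), v(3), F_ext(3)], u = [Fz, phi, theta, psi]."""
    v, F_ext = x[3:6], x[6:9]
    Fz, phi, theta, _ = u
    thrust_w = Fz * np.array([np.sin(theta), -np.sin(phi), 1.0])   # crude near-hover rotation
    acc = (thrust_w + F_ext) / M_KG - np.array([0.0, 0.0, 9.81])
    return np.concatenate([v, acc, np.zeros(3)])                   # constant-disturbance model

def f_disc(x, u, dt=DT):
    """One RK4 step of the augmented dynamics, i.e. x_{k+1} = f(x_k, u_k)."""
    k1 = f_cont(x, u)
    k2 = f_cont(x + 0.5 * dt * k1, u)
    k3 = f_cont(x + 0.5 * dt * k2, u)
    k4 = f_cont(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: one step from hover with an estimated 2 N upward suction force.
x0 = np.concatenate([np.zeros(6), [0.0, 0.0, 2.0]])
x1 = f_disc(x0, np.array([M_KG * 9.81, 0.0, 0.0, 0.0]))
```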
The problem in (8) can be reformulated as follows: $$\min_{x_{k}, w_{k}} \sum_{k=L}^{j} \|y_{k} - h(x_{k}, u_{k})\|^{2}_{V} + \sum_{k=L}^{j-1} \|w_{k}\|^{2}_{W} + \|x_{L} - \bar{x}_{L}\|^{2}_{P_{L}} \quad (9a)$$ s.t. $x_{k+1} = f(x_{k}, u_{k}) + w_{k}, \ \forall k \in [0, \ldots, j-1]$, (9b) $y_{k} = h(x_{k}, u_{k}) + \nu_{k}, \ \forall k \in [0, \ldots, j]$, (9c) $x_{min} \leq x_{k} \leq x_{max}$, (9d) where $\bar{x}_{L}$ is the estimate given by the arrival cost, which approximates the past values up to the first sample of the estimation window. In terms of the estimation problem, this term plays the same role as the initial guess $\bar{x}_{0}$ in (8), since it corresponds to the first sample of the estimation window. Similarly, $\hat{x}$ is the estimate given by the moving horizon estimation. The arrival cost can be defined by $$\arg\min_{x_{k}, w_{k}} \sum_{k=-\infty}^{L} \|y_{k} - h(x_{k}, u_{k})\|^{2}_{V} + \sum_{k=-\infty}^{L-1} \|w_{k}\|^{2}_{W} \quad (10a)$$ s.t. $x_{k+1} = f(x_{k}, u_{k}) + w_{k}$, (10b) $y_{k} = h(x_{k}, u_{k}) + \nu_{k}$, (10c) which can be solved under linearity assumptions, e.g., with a Kalman filter. The solution of (10) is adopted from [30]. The output of (10) will be $\bar{x}_{L+1}$ and $P_{L+1}$ for the next iteration of (9). The specified parameters for the estimation problem are summarized in Table I. Assumption 2: The state function $f(\cdot)$ and the associated costs are continuous and differentiable. V. CONTROLLER DESIGN In this work, a two-stage feedback controller is implemented. In the first stage, the NMPC generates the force in the z direction and the attitude reference for the feedback controller. In the second stage, a cascaded P and PID controller is used to generate the desired moments to be applied by the rotors. Finally, the rotor speeds are calculated by the control allocation matrix. The proposed approach is illustrated in Fig. 2. Consider an optimization-based control problem in the form of a squared norm: $$\min_{x_{k}, u_{k}} \sum_{k=j}^{\infty} \|e_{x}\|^{2}_{Q} + \sum_{k=j}^{\infty} \|e_{u}\|^{2}_{R} \quad (11a)$$ s.t. $\hat{x}_{k} = x_{k}$, (11b) $x_{k+1} = f(x_{k}, u_{k})$, (11c) $x_{k} = (x_{0}, x_{1}, \ldots, x_{k-1}, x_{k}, \ldots)$, (11d) $u_{k} = (u_{0}, u_{1}, \ldots, u_{k-2}, u_{k-1}, \ldots)$, (11e) $x_{min} \leq x_{k} \leq x_{max}$, (11f) $u_{min} \leq u_{k} \leq u_{max}$, (11g) where $e_{x} = (x^{r}_{k} - x_{k})$ and $e_{u} = (u^{n}_{k} - u_{k})$, in which $x^{r}_{k}$ is the trajectory reference for the system and $u^{n}_{k}$ is the nominal control signal. The weight matrix Q is positive semidefinite and the weight matrix R is positive definite. These weight matrices affect the performance of the system. However, with infinite sequences of controlled states and control actions, the problem may not be tractable, as it results in a potentially infinite-dimensional optimization problem [31]. Similar to the NMHE case, the infinite-dimensional problem can be defined in a receding horizon manner: $$\min_{x_{k}, u_{k}} \sum_{k=j}^{j+N-1} \left(\|e_{x}\|^{2}_{Q} + \|e_{u}\|^{2}_{R}\right) + \sum_{k=j}^{j+N} \|e_{x}\|^{2}_{S} \quad (12a)$$ s.t. (11b), (11c), (11d), (11e), (11f), (11g). (12b) In this representation, the contribution of the states and control actions beyond the finite and moving horizon is approximated by the terminal cost. Similar to the stage cost weights, the weight matrix S is also positive semidefinite. The specified parameters for the controller are given in Table II. The proposed optimization-based approach is set up using the real-time iteration scheme in ACADO [32] and solved with qpOASES [33]. First, self-contained C code is generated by ACADO for the NMHE and the NMPC. Afterward, this code is integrated into the ROS Kinetic environment. After evaluations in the simulation environment (Gazebo), the system is tested in an OptiTrack motion capture system, which localizes the aerial robot at 240 Hz over the Wi-Fi network.
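To illustrate the receding-horizon principle in (11)-(12) and how the estimated external force enters the controller model, below is a small self-contained sketch on a simplified 1-D vertical model. The model, weights, horizon, input limits, and the use of SciPy's L-BFGS-B solver are all illustrative assumptions; the paper's implementation instead uses ACADO-generated code with qpOASES.

```python
# Toy receding-horizon sketch on a 1-D vertical model (height z, velocity w,
# estimated external force F_ext_z). All numbers and the solver choice are assumptions.
import numpy as np
from scipy.optimize import minimize

M_KG, G, DT, N = 1.5, 9.81, 0.05, 20           # assumed mass, gravity, step, horizon

def step(z, w, Fz, F_ext_z):
    """Forward-Euler step of the vertical dynamics m*w_dot = Fz + F_ext_z - m*g."""
    acc = (Fz + F_ext_z) / M_KG - G
    return z + DT * w, w + DT * acc

def nmpc_cost(Fz_seq, z0, w0, z_ref, F_ext_z, q=10.0, r=0.01, s=50.0):
    """Stage cost on tracking error and thrust deviation, plus a terminal term."""
    z, w = z0, w0
    Fz_hover = M_KG * G - F_ext_z                # nominal input given the force estimate
    cost = 0.0
    for Fz in Fz_seq:
        z, w = step(z, w, Fz, F_ext_z)
        cost += q * (z_ref - z) ** 2 + r * (Fz_hover - Fz) ** 2
    return cost + s * (z_ref - z) ** 2           # terminal cost on the last state

def nmpc_control(z0, w0, z_ref, F_ext_z_hat, Fz_max=30.0):
    """Solve the finite-horizon problem and apply only the first input."""
    x0 = np.full(N, M_KG * G)
    res = minimize(nmpc_cost, x0, args=(z0, w0, z_ref, F_ext_z_hat),
                   bounds=[(0.0, Fz_max)] * N, method="L-BFGS-B")
    return res.x[0]

# Example: hold 1 cm below a ceiling at 2 m while the estimator reports an upward suction force.
Fz_cmd = nmpc_control(z0=1.90, w0=0.0, z_ref=1.99, F_ext_z_hat=2.0)
```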
VI. EXPERIMENTAL RESULTS In order to show the effectiveness of our proposed approach, we performed a set of experiments. By leveraging the optimization-based force estimation and the optimization-based controller, the quadrotor system is tested while approaching the ceiling. A. Experimental Setup We used a quadrotor platform for the experiments, and the scheme of the test procedure can be seen in Fig. 3. It is a small-scale quadrotor (DJI F450), and its subcomponents can be seen in Fig. 4. The experimental setup is equipped with a PX4 FMU and a Raspberry Pi 3 onboard computer. While the Raspberry Pi 3 is responsible for the higher-level tasks (commanding the generated throttle and angular positions), the PX4 FMU (Firmware v1.6.5) handles the attitude setpoint tracking as well as reaching the required vertical force values. For the serial connection between the onboard computer and the PX4, an FTDI cable is used. In the experiments, the PX4 unit's attitude controller is used. B. Close Proximity Flight Performance Comparison In order to evaluate the performance of the proposed controller statistically, we have compared its performance with a PID controller and an NMPC without force estimation in a scenario where the robot is commanded to stabilize itself under a ceiling at different proximities. The deviation from the desired distance to the ceiling is measured for the three controllers and plotted in Fig. 5. The results show that the level of interaction starts affecting the performance of the PID and the NMPC while approaching the ceiling; they could not bring the robot back to the reference point. The performance of these controllers further deteriorates as the robot approaches the ceiling, and eventually they fail to overcome the suction force and the robot sticks to the ceiling. On the other hand, the proposed force estimation-based nonlinear MPC approach manages to keep the robot even at 1 cm below the ceiling without experiencing any sticking. C. Evaluation of the Battery Performance in Close Proximity We investigated the battery currents and voltages measured by the Pixhawk to evaluate the power efficiency of the flight. In this analysis, we excluded the PID and the NMPC since they already fail to stay in the close-proximity interaction zones. The average power consumption is calculated as $$P_{ave} = \frac{1}{T}\int_{t=0}^{t=T} v(t)\,i(t)\,dt, \quad (13)$$ where T is the duration of measurement for each distance, and v(t) and i(t) are the battery voltage and drawn current at time t, respectively. The experimental average power consumption results are summarized in Fig. 6 for different proximities. It is observed that the power consumption of the system decreases by up to 12.5% in close-proximity flight. Considering the reduced power demand of the UAV, we expect that the flight duration can be longer when the system flies below the ceiling. D. Further Controller Performance Analysis In the flight tests, a random point in the air is first set for the aerial robot. After this hovering phase, a reference is generated to go below the ceiling. In order to evaluate the system performance, different proximities are defined. For this set of tests, the distance is measured from the top of the Raspberry Pi, as illustrated in Fig. 3. The distance between the propellers and the Raspberry Pi is 6 cm. In the first experiment, the system needs to stay 16 cm below the ceiling while keeping its orientation. In Fig. 7, the system response in terms of tracking (Fig. 7a), controller action (Fig. 7b), and force estimation (Fig. 7c) can be seen.
The first instant of the green region indicates the switching mechanism, where the force estimation-based NMPC is activated. In the first part of the figures, the nominal model-based NMPC is used. As can be seen, the controller can bring the system to the defined reference when the additive model is leveraged. Similar to the first case, the system is also tested 11 cm below the ceiling. When the switching mechanism is activated, the proposed approach actively suppresses the ceiling effect. It is noted that the disturbance on the system becomes more dominant when it flies within the 10 cm range, as can be seen in Fig. 8. The system performance 6 cm below the ceiling is given in Fig. 9. The active force estimation-based NMPC mitigates the ceiling effect. Compared to Fig. 7b and Fig. 8b, the control effort is decreased. Since the ceiling effect increases the rotor wake, which results in an increase in thrust, the system does not need to generate the same thrust to stay in close proximity to the ceiling as it does when hovering in free flight. One of the extreme cases, i.e., staying 1 cm below the ceiling, is tested as shown in Fig. 10. The system still handles the ceiling effect so as not to be in permanent physical contact with the ceiling. Since the battery state is observed online in this implementation, it is found that the current drawn from the battery can decrease significantly during flight in very close proximity (by up to 15.8%). The average computation time of the NMPC is approximately 1.98 ms. When it is switched to the Force-NMPC, there is an increasing trend as the system approaches the ceiling (from 1.93 ms to 2 ms). The average computation time of the NMHE is around 3.35 ms. A similar rise is observed in the NMHE case, where the computation time changes from 3.29 ms to 3.38 ms while approaching the ceiling. VII. DISCUSSION AND CONCLUSION In this work, we presented a force estimation-based nonlinear MPC approach for operating quadrotors in close proximity to their surroundings. Our framework generates attitude angles and vertical force values that satisfy the dynamic behavior of the system together with the physical limits. Our algorithm is applicable in real time, and all the computations stay below 10 ms. We validated our approach experimentally in real time using a small-scale quadrotor platform. In the dual problem, the proposed estimation approach captures the defects (external forces, disturbances, unmodeled dynamics, and modeling mismatch) of the model leveraged within the controller. To this end, a suitable dynamical model for both the free-flight and interaction cases has been presented, together with an optimization framework for generating optimal motion reactions in close proximity. Our approach is agnostic to information about the environment (e.g., the distance to the ceiling), and hence eliminates the need for dedicated proximity or wind sensors and a precise mathematical model. One main limitation of this work is the use of a motion capture system, which provides precise motion information. The estimation approach identifies the external forces online after each new pose measurement becomes available. For in-situ inspection operations, we plan to adopt visual-inertial odometry methods for pose estimation in our future work. There are several potential extensions of the proposed work.
First of all, we intend to test our control algorithm in more challenging environments with more complex interactions, such as side walls and ceilings with stalactite-like structures. Second, the proposed approach lends itself to a data-driven perspective: a ceiling effect model may be further explored through data collection. Third, energy aspects can be included in the cost function by mapping the current drawn from the batteries and the generated vertical forces, to create a controller that is capable of prolonging flight duration and is aware of battery aging [34]. This can allow planning and executing efficient task-based trajectories to extend the flight envelope.
3,285