We've seen plenty of Beats-focused KIRFs in our time, some better than others. Few, however, play quite so directly on the name as OrigAudio's Beets. For $25, adopters get a set of headphones that bear little direct resemblance to Dr. Dre's audio gear of choice, but are no doubt bound to impress friends -- at least, up until they see a root vegetable logo instead of a lower-case B. Thankfully, there's more to it than just amusing and confusing peers. Every purchase will lead to a donation of canned beets (what else?) to the Second Harvest Food Bank of Orange County. For us, that's reason enough to hope that Beats doesn't put the kibosh on OrigAudio's effort. Besides, we could use some accompaniment for our BeetBox.
Q: NullPointerException in getView of custom adapter

I'm getting an image from a bitmap method and trying to populate the ListView, but when I call the bitmap method inside getView a NullPointerException occurs. Please help me. Here is my view Activity class:

public class Viewactivity extends Activity {
    TextView tv;
    ImageView im;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.views);
        ListView mListView = (ListView) findViewById(R.id.listView);
        // array holds all images
        int Images[] = new int[]{ R.drawable.confidential, ... };
        // array holds all strings to be drawn in the image
        CustomList adaptor = new CustomList(this, Images);
        mListView.setAdapter(adaptor);
    }

    public Bitmap ProcessingBitmap(int image) {
        Bitmap bm1 = null;
        Bitmap newBitmap = null;
        final String data = getIntent().getExtras().getString("keys");
        bm1 = ((BitmapDrawable) Viewactivity.this.getResources()
                .getDrawable(image)).getBitmap();
        Config config = bm1.getConfig();
        if (config == null) {
            config = Bitmap.Config.ARGB_8888;
        }
        newBitmap = Bitmap.createBitmap(bm1.getWidth(), bm1.getHeight(), config);
        Canvas newCanvas = new Canvas(newBitmap);
        newCanvas.drawBitmap(bm1, 0, 0, null);
        if (data != null) {
            Paint paintText = new Paint(Paint.ANTI_ALIAS_FLAG);
            paintText.setColor(Color.RED);
            paintText.setTextSize(300);
            // paintText.setTextAlign(Align.CENTER);
            paintText.setStyle(Style.FILL);
            paintText.setShadowLayer(10f, 10f, 10f, Color.BLACK);
            Rect rectText = new Rect();
            paintText.getTextBounds(data, 0, data.length(), rectText);
            paintText.setTextScaleX(1.f);
            newCanvas.drawText(data, 0, rectText.height(), paintText);
            Toast.makeText(getApplicationContext(), "drawText: " + data, Toast.LENGTH_LONG).show();
        } else {
            Toast.makeText(getApplicationContext(), "caption empty!", Toast.LENGTH_LONG).show();
        }
        return newBitmap;
    }
}

This is my adapter class:

public class CustomList extends BaseAdapter {
    Viewactivity act;
    int[] IMAGES;
    LayoutInflater inflator;
    Context sContext;
    //private String[] TEXTS;

    public CustomList(Context context, int[] images) {
        this.IMAGES = images;
        //this.TEXTS = texts;
        this.sContext = context;
        inflator = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
    }

    @Override
    public int getCount() {
        return IMAGES.length;
    }

    @Override
    public Object getItem(int position) {
        return position;
    }

    @Override
    public long getItemId(int position) {
        return position;
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View v = inflator.inflate(R.layout.row_list, parent, false);
        final ImageView imageView = (ImageView) v.findViewById(R.id.imageView);
        imageView.setImageBitmap(act.ProcessingBitmap(IMAGES[position])); // line no: 52
        return imageView;
    }
}

This is my logcat:

12-18 06:16:51.406: E/AndroidRuntime(1388): FATAL EXCEPTION: main
12-18 06:16:51.406: E/AndroidRuntime(1388): Process: com.emple.example, PID: 1388
12-18 06:16:51.406: E/AndroidRuntime(1388): java.lang.NullPointerException
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.emple.example.CustomList.getView(CustomList.java:52)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.AbsListView.obtainView(AbsListView.java:2263)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.ListView.measureHeightOfChildren(ListView.java:1263)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.ListView.onMeasure(ListView.java:1175)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.RelativeLayout.measureChild(RelativeLayout.java:689)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.RelativeLayout.onMeasure(RelativeLayout.java:473)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.FrameLayout.onMeasure(FrameLayout.java:310)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.android.internal.widget.ActionBarOverlayLayout.onMeasure(ActionBarOverlayLayout.java:327)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:5125)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.widget.FrameLayout.onMeasure(FrameLayout.java:310)
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.android.internal.policy.impl.PhoneWindow$DecorView.onMeasure(PhoneWindow.java:2291)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.View.measure(View.java:16497)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl.performMeasure(ViewRootImpl.java:1916)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl.measureHierarchy(ViewRootImpl.java:1113)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1295)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:1000)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:5670)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.Choreographer$CallbackRecord.run(Choreographer.java:761)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.Choreographer.doCallbacks(Choreographer.java:574)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.Choreographer.doFrame(Choreographer.java:544)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:747)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.os.Handler.handleCallback(Handler.java:733)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.os.Handler.dispatchMessage(Handler.java:95)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.os.Looper.loop(Looper.java:136)
12-18 06:16:51.406: E/AndroidRuntime(1388): at android.app.ActivityThread.main(ActivityThread.java:5017)
12-18 06:16:51.406: E/AndroidRuntime(1388): at java.lang.reflect.Method.invokeNative(Native Method)
12-18 06:16:51.406: E/AndroidRuntime(1388): at java.lang.reflect.Method.invoke(Method.java:515)
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:779)
12-18 06:16:51.406: E/AndroidRuntime(1388): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:595)
12-18 06:16:51.406: E/AndroidRuntime(1388): at dalvik.system.NativeStart.main(Native Method)
12-18 06:21:51.616: I/Process(1388): Sending signal. PID: 1388 SIG: 9

A: You haven't initialized your act variable, so it is still null when getView dereferences it. Initialize it in your adapter constructor, something like:

public CustomList(Viewactivity act, int[] images) {
    this.act = act;
    this.IMAGES = images;
    //this.TEXTS = texts;
    this.sContext = act;
    inflator = (LayoutInflater) act.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
}
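The crash boils down to a plain-Java fact that can be reproduced outside Android: a reference field that is declared but never assigned stays null, and the first dereference throws. A minimal, Android-free sketch of the failure mode and the constructor-injection fix (Helper, BrokenAdapter and FixedAdapter are illustrative stand-ins, not Android classes):

```java
// A stand-in for the activity that does the bitmap work.
class Helper {
    String processingBitmap(int image) {
        return "bitmap-" + image; // stands in for the real drawing code
    }
}

// Mirrors the adapter in the question: `act` is declared but never assigned,
// so it is still null when getView() runs.
class BrokenAdapter {
    Helper act;

    String getView(int position) {
        return act.processingBitmap(position); // throws NullPointerException
    }
}

// The fix from the answer: hand the dependency in via the constructor.
class FixedAdapter {
    private final Helper act;

    FixedAdapter(Helper act) {
        this.act = act;
    }

    String getView(int position) {
        return act.processingBitmap(position);
    }
}

public class Main {
    public static void main(String[] args) {
        try {
            new BrokenAdapter().getView(0);
        } catch (NullPointerException e) {
            System.out.println("BrokenAdapter: NullPointerException, as in the logcat");
        }
        System.out.println("FixedAdapter: " + new FixedAdapter(new Helper()).getView(0));
    }
}
```

Making the field final, as above, turns "forgot to initialize" from a runtime crash into a compile-time error.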
Syringocystadenoma papilliferum of the cervix presenting as vulvar growth in an adolescent girl. Syringocystadenoma papilliferum (SCP) is a rare, benign, adnexal tumour of apocrine or eccrine differentiation. It is commonly located in the head and neck region. We report the case of an 18-year-old woman who presented with a vulvar lobulated growth that was found to arise from the posterior lip of the cervix. Histopathological examination revealed the diagnosis of SCP. To our knowledge, SCP arising from the cervix has never been reported previously in the literature; thus, we believe this to be the first case of SCP arising from the posterior lip of the cervix.
The basic goal of the effective altruism movement is to create efficient philanthropic change by backing programs and innovations that are cost-effective, so that each dollar given impacts as many people as possible. The underlying tenet is that donor dollars are a limited resource, but dollars are just one of the limiting factors. There’s still another major resource that needs to be accounted for: research time. There’s a learning curve for calculation-driven cause groups (and donors) to figure out what world-plaguing problems really are the most pressing, what solutions seem the most promising or neglected, and what else might need to be done. The problem is there hasn’t been a single resource for accessing all this information in one place. To change that, Rethink Priorities, an initiative of the effective altruism awareness and engagement building nonprofit Rethink Charity, has launched Priority Wiki, a publicly editable Wikipedia-like online encyclopedia for cause prioritization wonks. It collects and categorizes vetted research around pressing charitable causes and potential interventions. “This is a big problem because thousands of hours are going into this kind of research, and you don’t want people to forget it exists, or maybe try to duplicate efforts, or just not even remember it,” says Peter Hurford, who codeveloped the wiki alongside colleague Marcus Davis. “We’re trying to capture all relevant research under a wide variety of global issues so that everyone can have a go-to spot to get up to speed.” To do that, the wiki is organized into seven broad categories of causes: “Existential/Catastrophic Future Risks,” “Improving Research,” “Decisions and Values,” “Improving Policy,” “Developing World Health and Economic Development,” “Developed World Health and Economic Development,” and “Specific Scientific Research.” Each category then contains entries on related topics.
Under the catastrophe heading, for instance, there’s biosecurity, nuclear security, climate change, and geomagnetic storms. As the developers explain in an open letter about their efforts, the wiki is currently populated with a collection of research by effective altruism research organizations including Open Philanthropy, GiveWell, 80,000 Hours, and Animal Charity Evaluators. Many of these are formatted in what’s commonly referred to as a “shallow review,” or high-level overview of each issue, and various important statistics and findings. “That gives you a lot of opportunities to dive into the problem and make a more structured way than dumping someone a 60-item reading list,” says Hurford. Contributors are already revising the content and sharing data about things the originators hadn’t considered. Two recent additions include information about psychedelics and drug reform, and how to prevent or reduce aging-related diseases to extend our natural lifespan.
Essays Philosophers who think everyday morality is objective should examine the evidence, argues Joshua Knobe. Imagine two people discussing a question in mathematics. One of them says “7,497 is a prime number,” while the other says, “7,497 is not a prime number.” In a case like this one, we would probably conclude that there can only be a single right answer. We might have a lot of respect for both participants in the conversation, we might agree that they are both very reasonable and conscientious, but all the same, one of them has got to be wrong. The question under discussion here, we might say, is perfectly objective. But now suppose we switch to a different topic. Two people are talking about food. One of them says “Don’t even think about eating caterpillars! They are totally disgusting and not tasty at all,” while the other says “Caterpillars are a special delicacy – one of the tastiest, most delectable foods a person can ever have occasion to eat.” In this second case, we might have a very different reaction. We might think that there isn’t any single right answer. Maybe caterpillars are just tasty for some people but not for others. This latter question, we might think, should be understood as relative. Now that we’ve got at least a basic sense for these two categories, we can turn to a more controversial case. Suppose that the two people are talking about morality. One of them says “That action is deeply morally wrong,” while the other is speaking about the very same action and says “That action is completely fine – not the slightest thing to worry about.” In a case like this, one might wonder what reaction would be most appropriate. Should we say that there is a single right answer and anyone who says the opposite must be mistaken, or should we say that different answers could be right for different people? In other words, should we say that morality is something objective or something relative? 
This is a tricky question, and it can be difficult to see how one might even begin to address it. Faced with an issue like this one, where exactly should we look for evidence? Though philosophers have pursued numerous approaches here, one of the most important and influential is to begin with certain facts about people’s ordinary moral practices. The idea is that we can start out with facts about people’s usual ways of thinking or talking and use these facts to get some insight into questions about the true nature of morality. Thinkers who take this approach usually start out with the assumption that ordinary thought and talk about morality has an objectivist character. For example, the philosopher Michael Smith claims that we seem to think moral questions have correct answers; that the correct answers are made correct by objective moral facts; that moral facts are wholly determined by circumstances and that, by engaging in moral conversation and argument, we can discover what these objective moral facts determined by the circumstances are. And Frank Jackson writes: I take it that it is part of current folk morality that convergence will or would occur. We have some kind of commitment to the idea that moral disagreements can be resolved by sufficient critical reflection – which is why we bother to engage in moral debate. To that extent, some sort of objectivism is part of current folk morality. Then, once one has in hand this claim about people’s ordinary understanding, the aim is to use it as part of a complex argument for a broader philosophical conclusion. It is here that philosophical work on these issues really shines, with rigorous attention to conceptual distinctions and some truly ingenious arguments, objections and replies. There is just one snag. The trouble is that no real evidence is ever offered for the original assumption that ordinary moral thought and talk has this objective character. 
Instead, philosophers tend simply to assert that people’s ordinary practice is objectivist and then begin arguing from there. If we really want to go after these issues in a rigorous way, it seems that we should adopt a different approach. The first step is to engage in systematic empirical research to figure out how the ordinary practice actually works. Then, once we have the relevant data in hand, we can begin looking more deeply into the philosophical implications – secure in the knowledge that we are not just engaging in a philosophical fiction but rather looking into the philosophical implications of people’s actual practices. Just in the past few years, experimental philosophers have been gathering a wealth of new data on these issues, and we now have at least the first glimmerings of a real empirical research program here. But a funny thing happened when people started taking these questions into the lab. Again and again, when researchers took up these questions experimentally, they did not end up confirming the traditional view. They did not find that people overwhelmingly favoured objectivism. Instead, the results consistently point to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers, while other people under other circumstances offer more relativist views. And that is not all. The experimental results seem to be giving us an ever deeper understanding of why it is that people are drawn in these different directions, what it is that makes some people move toward objectivism and others toward more relativist views. For a nice example from recent research, consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. 
Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters – John and Fred – who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). What they found was a quite surprising result. It just wasn’t the case that participants overwhelmingly favoured the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer. Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong. Taking a slightly different approach, Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. 
Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? What the results revealed was a systematic developmental difference. Young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. In other words, there appears to be a developmental shift toward increasing relativism as children mature. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people showing less inclination toward relativism than college students do.) So there we have it. People are more inclined to be relativists when they score highly in openness to experience, when they have an especially good ability to consider multiple possibilities, when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our thought was that people might be drawn to relativism to the extent that they open their minds to alternative perspectives. There could be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seemed always to be finding the same basic effect. The more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism. To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies. 
Our aim was to actually manipulate the degree to which people considered alternative perspectives. That is, we wanted to randomly assign people to different conditions in which they would end up thinking in different ways, so that we could then examine the impact of these different conditions on their intuitions about moral relativism. Participants in one condition got more or less the same sort of question used in earlier studies. They were asked to imagine that someone in the United States commits an act of infanticide. Then they were told to suppose that one person from their own college thought that this act was morally bad, while another student, Sam, thought that it was morally permissible. The question then was whether they would agree or disagree with the following statement: Since your classmate and Sam have different judgments about this case, at least one of them must be wrong. Participants in the other conditions received questions aimed at moving their thinking in a different direction. Those who had been assigned to the “other culture” condition were told to imagine an Amazonian tribe, the Mamilons, which had a very different way of life from our own. They were given a brief description of this tribe’s rituals, values and modes of thought. Then they were told to imagine that one of their classmates thought that the act of infanticide was morally bad, while someone from this Amazonian tribe thought that the act was morally permissible. These participants were then asked whether they agreed or disagreed with the corresponding statement: Since your classmate and the Mamilon have different judgments about this case, at least one of them must be wrong. Finally, participants in the “extraterrestrial” condition were told about a culture that was just about as different from our own as can possibly be conceived. They were asked to imagine a race of extraterrestrial beings, the Pentars, who have no interest in friendship, love or happiness.
Instead, the Pentars’ only goal is to maximise the total number of equilateral pentagons in the universe, and they move through space doing everything in their power to achieve this goal. (If a Pentar becomes too old to work, she is immediately killed and transformed into a pentagon herself.) As you might guess, these participants were then told to imagine a Pentar who thinks that the act of infanticide is morally permissible. Then came the usual statement: Since your classmate and the Pentar have different judgments about this case, at least one of them must be wrong. The results of the study showed a systematic difference between conditions. In particular, as we moved toward more distant cultures, we found a steady shift toward more relativist answers – with people in the first condition tending to agree with the statement that at least one of them had to be wrong, people in the second being pretty evenly split between the two answers, and people in the third tending to reject the statement quite decisively. Note that all participants in the study are considering judgments about the very same act. There is just a single person, living in the United States, who is performing an act of infanticide, and participants are being asked to consider different judgments one might make about that very same act. Yet, when participants are asked to consider individuals who come at the issue from wildly different perspectives, they end up concluding that these individuals could have opposite opinions without either of them being in any way wrong. This result seems strongly to suggest that people can be drawn under certain circumstances to a form of moral relativism. But now we face a new question. 
If we learn that people’s ordinary practice is not an objectivist one – that it actually varies depending on the degree to which people take other perspectives into account – how can we then use this information to address the deeper philosophical issues about the true nature of morality? The answer here is in one way very complex and in another very simple. It is complex in that one can answer such questions only by making use of very sophisticated and subtle philosophical methods. Yet, at the same time, it is simple in that such methods have already been developed and are being continually refined and elaborated within the literature in analytic philosophy. The trick now is just to take these methods and apply them to working out the implications of an ordinary practice that actually exists. Joshua Knobe is an associate professor at Yale University, affiliated both with the Program in Cognitive Science and the Department of Philosophy.
Getting the DID number from a CallCentric SIP trunk for FreePBX

I’ve got a few DDI numbers from CallCentric all around the world (UK, US, Australia) and couldn’t figure out how to set up an ‘Inbound Route’ in FreePBX that used the number that had been dialled to route the call. It turns out that you need to extract the number from the ‘SIP header’ information, and there’s no setting in FreePBX to do this, so it means hacking at the Asterisk config files just a little. There are a few methods for doing this, but these instructions should work for FreePBX/Asterisk. When setting up your ‘SIP trunk’ in FreePBX under ‘PEER DETAILS’ you want to put in the line “context=custom-get-did-from-sip”, then you need to edit the file /etc/asterisk/extensions_custom.conf and add the following lines –
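The post breaks off before the config itself, so here is a minimal sketch of what such a custom context typically looks like, assuming the dialled number arrives in the SIP `To` header. The `pseudodid` variable name and the catch-all `_.` pattern are illustrative, and `from-trunk` is the stock FreePBX inbound context; adjust if your setup differs:

```
[custom-get-did-from-sip]
exten => _.,1,Noop(Extracting DID from SIP To header)
exten => _.,n,Set(pseudodid=${SIP_HEADER(To)})     ; e.g. <sip:17771234567@in.callcentric.com>
exten => _.,n,Set(pseudodid=${CUT(pseudodid,@,1)}) ; drop the host part
exten => _.,n,Set(pseudodid=${CUT(pseudodid,:,2)}) ; drop the "sip:" prefix
exten => _.,n,Goto(from-trunk,${pseudodid},1)      ; hand off to FreePBX inbound routes
```

With something like this in place, FreePBX ‘Inbound Routes’ can match on the actual DDI number that was dialled rather than on the trunk’s account number.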
Introduction ============ Blood-borne pathogens first encounter the adaptive immune system in the marginal zone region of the spleen where the convergence of innate and adaptive immune mechanisms insures an early and effective response to pathogen antigens ([@bib1], [@bib2]). Both thymic-independent and -dependent responses are elicited in response to infection ([@bib1], [@bib3]). The thymic-independent response involves the targeting and activation of marginal zone B cells (MZBs)[\*](#fn1){ref-type="fn"}through their interaction with the repetitive antigenic determinants of pathogens with complement and B cell antigen receptors ([@bib4], [@bib5]). In contrast, the thymic-dependent Ab response is driven by the interaction and reciprocal stimulation of APCs, T lymphocytes, and B cells. The organization of the splenic white pulp nodule into discrete zones enriched for either B cells, T cells, or APCs provide a spatial microenvironment that facilitates an efficient interaction of pathogens with the various cellular populations required for insuring an efficient immune response ([@bib6]--[@bib8]). Antigen presentation and stimulation of T and B cells ultimately results in the formation of germinal centers, high affinity neutralizing Abs, and memory cells. Recent reports have begun to define the cellular components and molecular signals that are necessary to establish the marginal zone. B cell intrinsic pathways have been described involving specific chemokines and their receptors, molecules involved in B cell activation, as well as adhesion molecules and their ligands ([@bib9], [@bib10]). Apart from the MZB, the other predominant cell of the marginal zone is the marginal zone macrophage (MZMO), which is distinct from the metallophilic macrophage, defined by the marker MOMA-1, located at the border of the marginal and follicular zone ([@bib11]). 
The MZMO is defined by its location, interspersed in several layers within the marginal zone, and by its expression of the markers MARCO and ER-TR9 ([@bib12], [@bib13]). The former molecule is a scavenger receptor belonging structurally to the class A receptor family whereas the latter is identical to the C-type lectin SIGN-RI ([@bib14]--[@bib17]). MARCO has been shown to bind a range of microbial Ags including *Staphylococcus aureus* and *Escherichia coli* whereas SIGN-RI is the predominant receptor for uptake of polysaccharide dextran by MZMOs. Even though both MZBs and MZMOs are implicated in both thymus-dependent and -independent immune responses, the exact roles of the two cell types in initiation of the response to blood-borne pathogens is not known. We now define a unique role for the MZMO in regulation of MZB retention and activation and show that movement of this subset of macrophages to the red pulp of the spleen involves signaling via SH2-containing inositol-5-phosphatase 1 (SHIP) and Bruton\'s tyrosine kinase (Btk). In addition, we show a direct interaction between MZMOs and MZBs via the MARCO receptor on MZMOs and a ligand on MZBs. Materials and Methods ===================== Mice. ----- C57BL/6 mice obtained from The Jackson Laboratory were used as WT mice and controls unless otherwise stated. Founders of SHIP-deficient mice were provided by G. Krystal (Terry Fox Laboratory, BC Cancer Agency, Vancouver, Canada; reference [@bib18]) and Btk-deficient mice were purchased from The Jackson Laboratory. Op/op mice were provided by J. Pollard (Albert Einstein College of Medicine, New York, NY) and LysMCre transgenic mice ([@bib19]) were provided by I. Forster (Technical University of Munich, Germany). Abs and bacteria was injected i.v. in the tail vein and all experiments involving mice were performed in accordance with National Institutes of Health (NIH) guidelines. 
All mice were maintained under specific pathogen-free conditions at The Rockefeller University. Antibodies and Reagents. ------------------------ For histological examination 6-μM frozen sections were stained, and for FACS^®^ analysis erythrocyte-depleted spleen cells were used. Macrophages were detected using MOMA-1, MARCO Abs from Serotec, and ER-TR9 from Accurate Chemical & Scientific Corp. Abs to CD1d, B220, CD19, CD21/CD35 (CRI/II), CD23, MAC-1, anti--rat alkaline phosphatase, and anti--rabbit horseradish peroxidase were from BD Biosciences. Secondary Abs for immunohistochemistry, anti-biotin, anti-FITC F(ab′) horseradish peroxidase, or alkaline phosphatase were from DakoCytomation and rabbit anti--SHIP used for Western blot was from Upstate Biotechnology. Vector Blue Alkaline Phosphatase Substrate from Vector Laboratories and DAB peroxidase substrate from Sigma-Aldrich were used for development of immunohistochemistry stains. Soluble MARCO receptor was provided by T. Pikkarainen (The Karolinska Institute, Stockholm, Sweden; reference [@bib20]) and was biotinylated using the EZ-Link™ kit from Pierce Chemical Co. The biotinylated soluble MARCO was detected using Streptavidin-CyChrome™ from BD Biosciences. *S. aureus* fluorescent bioparticles were purchased from Molecular Probes, Inc. and MACS anti-FITC and anti-biotin beads were from Miltenyi Biotec. Cl~2~MDP (or clodronate) and PBS liposomes were provided by Roche Diagnostics. Conditional Targeting of SHIP. ------------------------------ Floxed SHIP mice were created by insertion of loxP sites flanking the 10th and 11th exons (see [Fig. 2](#fig2){ref-type="fig"} a) of the SHIP gene. The targeting vector was introduced into embryonic stem (ES) cells by electroporation and clones were selected with neomycin and ganciclovir and verified by Southern blot and PCR. Properly integrated ES clones were transiently transfected with a Cre-expressing plasmid. 
Clones were subsequently selected for a conditional floxed allele (SHIP^flox^) or null allele (SHIP^null^) using Southern blot and PCR. Appropriate ES clones were then injected into blastocysts to generate chimeric mice. The chimeric mice were then bred with C57BL/6 mice to achieve germline transmission. These mice were subsequently crossed with mice expressing Cre in the myeloid compartment (LysMcre; reference [@bib19]) to generate Cre^+^/null/flox mice. Mice were screened for the respective genotypes by PCR and for SHIP protein expression by Western blot ([@bib21]) on equal numbers of spleen cells purified by MACS (Miltenyi Biotec) sorting according to the manufacturer's protocol. Relative expression of SHIP in macrophage and B cell populations (comparing wt/null with flox/null/cre) was estimated using Alpha Imager software from Alpha Innotech Corp.

Results and Discussion
======================

Mice deficient in the inhibitory signaling molecule SHIP display pleiotropic defects in macrophages, NK cells, and lymphocytes ([@bib18], [@bib22]). A prominent feature of these mice is their splenomegaly resulting from dysregulation of myeloid proliferation. As seen in [Fig. 1](#fig1){ref-type="fig"}, SHIP-deficient mice also display a specific defect in the organization of the splenic follicle, with the loss of MZBs measured as the CD21^high^/CD23^low^ population in FACS^®^ and in sections as the B220^+^ cells localizing peripherally to the MOMA-1^+^ cells ([Fig. 1](#fig1){ref-type="fig"}, a and b). In the SHIP-deficient mice the MARCO^+^ MZMO cells are no longer organized within the marginal zone and adjacent to the MOMA-1 macrophages but are redistributed to the red pulp, whereas MOMA-1^+^ metallophils remain unaffected ([Fig. 1](#fig1){ref-type="fig"} b).

Figure 1. SHIP-deficient mice lack MZBs and MZMOs are displaced to the red pulp. (a) FACS^®^ profiles of single cell suspensions from the spleen of SHIP-heterozygous (SHIP^+/−^) and -deficient (SHIP^−/−^) mice. MZBs were measured as the CD19^+^, CRI^high^, and CD23^low^ population. The numbers shown represent percent of CD19^+^ cells for the depicted gates as an average of five mice. Numbers for the follicular B cells are shown for comparison. (b) Representative immunohistochemical analysis of the above listed mice. At least four serial sections from each mouse were stained for MOMA-1^+^ (blue, top) metallophilic macrophages or MARCO^+^ MZMOs (blue, bottom). Sections were also stained for B220 (brown) to show the positioning of the follicle. ×10.

Because SHIP is expressed in most hematopoietic cells, including lymphoid and myeloid subsets, we determined whether this marginal zone phenotype in SHIP-deficient mice was the result of primary macrophage dysregulation. A conditional disruption of SHIP was generated in which macrophages displayed a \>90% reduction in SHIP expression whereas B cell expression was reduced by \<10% ([Fig. 2](#fig2){ref-type="fig"}, a and b). This is consistent with the expression patterns of Cre recombinase, driven by the lysozyme promoter used ([@bib19]). The mice developed a splenomegaly at ∼5 wk of age ([Fig. 2](#fig2){ref-type="fig"} b), similar to that of complete SHIP deletion, thus implicating a primary macrophage defect as the cause for splenomegaly in SHIP^−/−^ mice ([@bib18]). In addition, the mice displayed essentially the same marginal zone phenotype, with significantly reduced MZBs as defined by flow cytometry and reorganization of the MZMOs as observed by histological staining ([Fig. 2](#fig2){ref-type="fig"} c).

Figure 2. Conditional targeting of SHIP in macrophages results in MZMO displacement and reduced numbers of MZBs. (a) A targeting construct covering exons 10 to 13 of SHIP, from EcoRI (E) to HindIII (H), was made. Boxes represent exons and triangles represent loxP sites flanking exons 10 to 11 and a neomycin resistance gene (neo). Properly integrated ES cell clones were transiently transfected with Cre recombinase to create conditional floxed (SHIP^flox^) or null (SHIP^null^) clones. These cells were subsequently used to create floxed (flox) and null mice, which were crossed with mice expressing Cre from a macrophage-specific lysosomal promoter (cre). (b) Western blot analysis of MAC1^+^ and CD19^+^ spleen cells (SPC) from WT, WT/null, null/null, and LysM floxed (flox/null/cre) mice, and relative spleen size of 6-wk-old WT/null and flox/null/cre SHIP mice. (c) FACS^®^ and histological profiles of single cell suspensions from the spleen of the conditionally targeted SHIP KO mice. MZBs were measured as the CD19^+^, CRI^high^, and CD23^low^ population. The numbers shown represent percent of CD19^+^ cells for the depicted gates as an average of five mice and the numbers for the follicular B cells are shown for comparison. For representative immunohistochemical analysis, at least four serial sections were stained for MOMA-1^+^ (blue, top) metallophilic macrophages or MARCO^+^ MZMOs (blue, bottom). Sections were also stained for B220 (brown) to show the positioning of the follicle. Refer to [Fig. 1](#fig1){ref-type="fig"} for SHIP^+/−^ and SHIP^−/−^ profiles. ×10.

To confirm that the SHIP phenotype is B cell nonautonomous and that SHIP-deficient B cells can give rise to MZB populations when WT MZMOs are available, we produced BM chimeras using SHIP-deficient BM combined with WT BM and injected these cells into irradiated WT recipients. In the resulting chimeric mice the SHIP-deficient and WT BMs contributed equally to the MZB population (unpublished data). In B cell lines it has been shown that SHIP functions as a negative regulator of cellular activation by regulating the association of the positive signaling kinase Btk with the membrane, thus raising the threshold required for stimulation ([@bib23]).
It does so by hydrolyzing PIP~3~, the substrate for Btk association with the membrane, thereby reducing the ability of Btk to become membrane associated and activated ([@bib24]). Because both SHIP and Btk are expressed in macrophages and a link between these molecules had been suggested, we reasoned that the myeloid proliferation and MZMO phenotype leading to the loss of MZBs might be the result of inappropriate activation of Btk in macrophages of SHIP-deficient animals ([@bib25], [@bib26]). Disruption of Btk in macrophages may thus be sufficient to restore normal signaling thresholds in SHIP-deficient mice. Combining the SHIP deficiency with a Btk deficiency resulted in the restoration of both the normal marginal zone structure ([Fig. 3](#fig3){ref-type="fig"} a) and spleen size ([Fig. 3](#fig3){ref-type="fig"} b), indicating that Btk is an important target of SHIP in myeloid cells in vivo. Similarly, Btk deficiency counteracted the overresponsiveness of myeloid progenitors to GM-CSF and M-CSF in SHIP-deficient mice (unpublished data).

Figure 3. SHIP and Btk interact in myeloid proliferation and activation. (a) FACS^®^ and histological profiles of single cell suspensions from the spleen of SHIP and Btk double KO mice (SHIP^−/−^/Btk^−^). MZBs were measured as the CD19^+^, CRI^high^, and CD23^low^ population. The numbers shown represent percent of CD19^+^ cells for the depicted gates as an average of four mice and the numbers for the follicular B cells are shown for comparison. For representative immunohistochemical analysis, at least four serial sections were stained for MOMA-1^+^ (blue, top) metallophilic macrophages or MARCO^+^ MZMOs (blue, bottom). Sections were also stained for B220 (brown) to show the positioning of the follicle. ×10. (b) Relative spleen size of 5-wk-old heterozygous KO or double KO mice.
Both the dysregulated myeloid proliferation and the disrupted follicular architecture likely result from enhanced signaling through the Btk pathway in myeloid cells. Reversion of the MZB and myeloid phenotypes in SHIP^−/−^ mice by deletion of Btk suggests that Btk is the dominant Tec family member regulated by SHIP in these cells. The observation that other members of the family are expressed in macrophages and have been shown to be able to substitute for Btk both in vivo and in KO mice indicates a surprising degree of specificity to the SHIP inhibitory pathway ([@bib27]--[@bib29]). These results suggested that MZMOs might be critical to the organization of the white pulp nodule and localization of MZBs in this structure. To test this directly we exploited the observation that MZMOs can be ablated by their preferential ingestion of macrophage-depleting liposomes ([@bib30]). At a low concentration of these liposomes we could see preferential depletion of MARCO^+^ MZMOs as opposed to the adjacent MOMA-1 macrophages ([Fig. 4](#fig4){ref-type="fig"}).

Figure 4. MARCO^+^ MZMOs are required for retention of MZBs. Representative immunohistochemical analysis and FACS^®^ profiles of spleens from at least four WT mice treated with liposomes or untreated op/op mice. WT mice were injected i.v. with 100 μl PBS containing liposomes or with liposomes containing clodronate at a 1:24 dilution, where MZMOs were preferentially depleted. 48 h later, serial spleen sections were stained for MOMA-1^+^ (blue, top) metallophilic macrophages or MARCO^+^ (blue, middle) MZMOs. The sections were also stained for B220 (brown) to see the positioning of these populations in relation to the B cell follicle. ×10. Spleen cells were analyzed by FACS^®^ analysis for detection of MZBs as measured by the CD19^+^, CRI^high^, and CD23^low^ population. Numbers shown are the average percent-positive cells of four mice. Similar profiles are shown for untreated *op/op* mice (right).
Data shown are representative of three independent experiments.

Other phagocytic cells in the spleen, such as red pulp macrophages and dendritic cells, were largely unaffected by this treatment (not depicted). When MZMOs were depleted in this fashion, we observed a specific reduction in the MZBs by both flow cytometry and histological staining. In contrast, MOMA-1 macrophages are specifically absent in the CSF-1--deficient strain *op/op*, but these mice retain MARCO^+^/ER-TR9^−^ MZMOs ([@bib31], [@bib32]). The absence of the MOMA-1^+^ cells and the ER-TR9 marker did not result in a reduction in MZBs; rather, an expansion of these cells was observed, indicating that the macrophage population required for MZB retention is the MARCO^+^ MZMOs. The identity of the retention signal expressed by MARCO^+^ MZMO cells was next determined by investigating the role of specific surface receptors on the MZMO in maintaining the marginal zone structure. The MARCO receptor, in addition to binding to bacteria ([@bib33]), contains an SRCR domain that has been implicated in binding to CD19^+^ lymphocytes ([@bib34], [@bib35]). To determine if MARCO itself is capable of binding to MZBs, we expressed the extracellular domains of MARCO as a soluble molecule ([@bib20]) and used it to stain splenic populations ([Fig. 5](#fig5){ref-type="fig"}).

Figure 5. Soluble MARCO receptor (sMARCO) binds preferentially to MZBs. Representative FACS^®^ analysis of spleen cells from WT mice stained with CRI, CD23, and biotinylated sMARCO. Binding of sMARCO to different spleen cell populations was based on gates set on the CRI versus CD23 stain. Red, MZBs; blue, follicular B cells; black, non-B cells. The histogram (bottom) shows the mean fluorescence index (MFI) and SD (*n* = 5) for the different populations as well as the avidin (Av) control and block using the MARCO-specific ED31 Ab. Data shown are representative of three independent experiments.
Three populations of cells were distinguished by flow cytometry when stained with CD21 and CD23. Maximal binding to soluble MARCO was observed for the MZBs (CD21^hi^ CD23^low^), whereas the follicular B cells (CD21^low^ CD23^hi^) displayed reduced binding. None of the other splenic populations (T cells, macrophages, or dendritic cells) were capable of binding to soluble MARCO. This binding was specific for the MARCO SRCR domain, as determined by the ability of a monoclonal Ab to this domain (ED31; reference [@bib33]) to block the binding of soluble MARCO to MZBs. When the MARCO-specific Ab was injected i.v. into WT mice, it resulted in disruption of the marginal zone structure, in which MZBs, identified by CD1d staining, were found in the follicular region whereas MZMOs, identified by ER-TR9 staining, were retained in the marginal zone ([Fig. 6](#fig6){ref-type="fig"}).

Figure 6. In vivo disruption of MARCO and MZB interactions leads to MZB migration to the follicle. WT mice were given 100 μg control rat IgG or anti-MARCO (ED31) IgG i.v. 3 h later, the mice were killed and the spleens were stained for macrophage and B cell populations. Representative stains of serial sections from at least four different mice are shown. MZMOs were detected with anti-MARCO (blue, top) or ER-TR9 (blue, middle) antibodies whereas metallophilic macrophages were stained with MOMA-1 (brown, bottom). B220^+^ B cells (brown) were stained for positioning of the follicle and MZBs as the CD1^high^ (blue, bottom) population. ×10. Part of the spleen was used for flow cytometric analysis to determine the CD19^+^, CRI^high^, and CD23^low^ populations. Numbers shown are the average of four mice. The percent of CD19^+^ cells for either MZBs or follicular B cells is shown for comparison. Data shown are representative of two independent experiments.
These results suggest that a direct interaction between MZMOs and MZBs is mediated by MARCO--MZB binding, through a MARCO ligand expressed on these B cells, and they provide a mechanism for the retention of MZBs by MARCO-expressing MZMO cells. Perturbation of this interaction, either by disruption of adhesion and/or induction of macrophage activation by MARCO cross-linking, results in the appearance of cells expressing a MZB surface phenotype in the follicular zone. To address the relevance of the MARCO^+^ MZMO and its retention of MZBs to the development of an immune response to pathogens, we injected mice i.v. with rhodamine-conjugated *S. aureus*, which is a known ligand for the MARCO receptor ([@bib12]). Within 30 min of injection, bacteria were visualized exclusively bound to the MZMO cells, a role consistent with the phagocytic property of these scavenger receptor--expressing cells ([Fig. 7](#fig7){ref-type="fig"}).

Figure 7. *S. aureus* induces MZMO movement and displacement of MZBs. WT mice were injected i.v. with 250 μg heat-killed and rhodamine-conjugated *S. aureus* in PBS. 0.5 or 18 h later, the mice were killed and the spleens were sectioned and stained. Representative stains from at least four mice are shown. MARCO^+^ MZMOs (left) are stained blue and B220^+^ B cells are stained brown. The middle shows the same stains as in the left, merged with the fluorescent stain of *S. aureus.* The right shows stains for the CD1^high^ MZB population (blue) and MOMA-1^+^ metallophilic macrophages (brown). ×10. The data shown are representative of two independent experiments.

18 h after injection, the microbes and the MZMO were found to have comigrated into the red pulp, and cells with a MZB phenotype (CD1d^high^) were mostly found in the follicular region. These results are consistent with a model in which interaction of *S.
aureus* with MARCO on MZMOs results in their migration into the red pulp and the concomitant migration of MZBs into the follicular region, as has been reported for LPS and *E. coli* ([@bib8], [@bib9]). The deletion of the inhibitory signaling molecule SHIP results in a similar MZMO migration response, suggesting that MZMO activation can trigger migration into the red pulp. We presume that the likely explanation for the migration seen in response to *S. aureus* ingestion is the activation of MZMOs by their encounter with these bacteria, as has been described ([@bib36], [@bib37]). A similar result was observed for *E. coli*, suggesting a more general migratory response by MZMO cells to microbial challenge (unpublished data). The migratory response of the MZMO, carrying Ag to the red pulp, could simply be a method of clearance of particulate Ags; alternatively, MZMOs could function as Ag transporters/presenters and supporters of plasmablast formation shown to take place in the red pulp ([Fig. 8](#fig8){ref-type="fig"}; references [@bib38]--[@bib40]). This has previously been reported to be a function of dendritic cells in the T/B cell border of the follicle and by macrophages supporting B1 B cells in the peritoneum ([@bib10]).

Figure 8. Proposed model for interactions between MZMO and MZB and the response of these cells to blood-borne pathogens. In the marginal zone (MZ), MZBs interact with the MZMO via the MARCO receptor (a) and with stromal elements via the ICAM/VCAM and their respective ligands LFA-1 and α4β1 (b). Upon phagocytosis of particulate Ags, the MARCO^+^ MZMOs migrate to the red pulp (c) and the majority of the MZBs migrate to the follicle, where they interact with cells such as dendritic and follicular dendritic cells (d, DC and FDC). In the early response to T cell--independent Ags, the MZB also has the capacity to migrate to the red pulp to take part in plasma cell formation (e), where a possible interaction between MZMOs and MZBs may take place.
Interestingly, Kang et al. ([@bib14]) recently showed that phagosomes in MZMOs, after uptake of dextran polysaccharides via SIGN-R1, did not stain positive for the endosomal markers LAMP-1 and transferrin. This suggests that Ags taken up by MZMOs may not necessarily take the route of normal phagosome maturation ([@bib41]) resulting in destruction or Ag presentation, and thus could provide a mechanism for MZMOs to transport intact Ag to the red pulp. These results suggest that the interaction of MZMO cells with MZBs is required to maintain the marginal zone structure and that this association is perturbed upon MZMO binding and activation by microbial pathogens. It is likely that the MZBs migrate into the follicular zone in response to CXCL13 ([@bib9]) in the absence of retention signals from the MARCO^+^ MZMOs. This pathway is likely to be independent of the integrin pathway involving stromal VCAM/ICAM and B cell LFA-1/α4β1, because disruption of that pathway with antibodies to LFA-1 and α4β1 results in the release of MZBs into the blood stream ([@bib9]), not their migration into the follicle, in contrast to the results presented here ([Fig. 8](#fig8){ref-type="fig"}). In addition, we see no effect on the localization of MZMO cells using antibodies to the stromal integrins, nor do we observe effects on their ligand expression when MZMO cells are triggered to migrate (unpublished data). These pathways are thus likely to serve different functions in the organization of the marginal zone, with the MZMO pathway specific for the antimicrobial response, leading to internalization of the organism and trafficking of B cells into the follicular zone to propagate the immune responses. MZBs have the capacity to bind polysaccharide Ags through complement-mediated pathways and transport these to the follicular area of the spleen ([@bib6], [@bib8], [@bib42]). The events we have described appear to be another mechanism for delivery of MZBs and Ag to the T cell--rich follicular region.
MZBs have mostly been implicated in the response to T cell--independent Ags; however, they are also capable of presenting Ags ([@bib43]) and may thus be important for both the T cell--dependent and --independent phases of the earliest defense against a pathogen.

We would like to thank members of the Ravetch and Steinman labs at The Rockefeller University, especially Pierre Bruhns, Patrick Smith, Maggi Pack, Chae Gyu Park, and Sayori Yamazaki, for technical assistance and comments on the manuscript. We also thank Dr. Jeffrey Pollard for op/op mice and Dr. Timo Pikkarainen for reagents and helpful comments.

This work was supported by the Swedish Cancer Society and the NIH.

*Abbreviations used in this paper:* Btk, Bruton\'s tyrosine kinase; ES, embryonic stem; MZB, marginal zone B cell; MZMO, marginal zone macrophage; SHIP, SH2-containing inositol-5-phosphatase 1.
Q: How can I check whether the current value has already been used in a nested foreach in PHP

My arrays:

    $key1 => Array ( [0] => 1 [1] => 2 [2] => 7 [3] => 11 [4] => 12 [5] => 17 [6] => 18 )
    $_POST['name'] => Array ( [0] => General [1] => General [2] => Outdoors [3] => Dining [4] => Kitchen )

Here is my code:

    foreach ($key1 as $key => $value) {
        // echo $value;
        foreach ($_POST['name'] as $key => $value1) {
            // echo $value;
            $subQueryCond .= ' AND ' . $value1 . ' LIKE ' . $value;
        }
    }

This nested loop runs when my Ajax call fires, and inside it I build a query. If one value is passed, the query has the form `AND 'General' LIKE 1`; if another value is passed in `$key1`, the fragment is appended again — once for every combination, however many array entries there are. So I would like to skip a `$value1` that has already been added. If two values are given, it currently produces

    AND General LIKE 1 AND Outdoors LIKE 1 AND General LIKE 7 AND Outdoors LIKE 7

and my desired query must be in the form of

    AND General LIKE 1 AND General LIKE 7 AND Outdoors LIKE 7

Can someone help me?

A: This will work for you:

    <?php
    // Use the clause text itself as the array key, so that duplicate
    // clauses overwrite each other instead of being appended twice.
    $subQueryCond = array();
    foreach ($key1 as $value) {
        foreach ($_POST['name'] as $value1) {
            $subQueryCond['AND ' . $value1 . ' LIKE ' . $value] = ' AND ' . $value1 . ' LIKE ' . $value;
        }
    }
    echo "<pre>";
    print_r($subQueryCond);
    $query = implode('', $subQueryCond);
    print_r($query);
    ?>

Just make an array with one unique key per clause, then use the implode() function to build the query string. (Note that `$subQueryCond` must be initialized as an array, not a string, before keys are assigned to it.)
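The answer's trick — keying a map on the clause text so duplicates collapse, then joining — can be sketched in Python for illustration (the helper name `build_conditions` is hypothetical; the original answer is PHP):

```python
def build_conditions(values, names):
    """Collect unique ' AND <name> LIKE <value>' clauses, preserving order.

    Mirrors the PHP answer: using the clause text as a dict key makes
    duplicate (name, value) combinations overwrite themselves, and dicts
    keep insertion order in Python 3.7+.
    """
    conds = {}
    for value in values:
        for name in names:
            clause = f" AND {name} LIKE {value}"
            conds[clause] = clause  # a repeated clause just overwrites itself
    return "".join(conds)


# "General" appears twice, as in the question's $_POST['name'] array,
# but each duplicate clause is emitted only once per value.
print(build_conditions([1, 7], ["General", "General", "Outdoors"]))
# → " AND General LIKE 1 AND Outdoors LIKE 1 AND General LIKE 7 AND Outdoors LIKE 7"
```

As in the PHP version, string concatenation of user input into SQL is shown only to match the question; real code should use parameterized queries instead.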
Safety of union home care aides in Washington State. A rate-based understanding of home care aides' adverse occupational outcomes related to their work location and care tasks is lacking. Within a 30-month, dynamic cohort of 43 394 home care aides in Washington State, injury rates were calculated by aides' demographic and work characteristics. Injury narratives and focus groups provided contextual detail. Injury rates were higher for home care aides categorized as female, white, 50 to <65 years old, less experienced, with a primary language of English, and working through an agency (versus individual providers). In addition to direct occupational hazards, variability in workload, income, and supervisory/social support is of concern. Policies should address the roles and training of home care aides, consumers, and managers/supervisors. Home care aides' improved access to often-existing resources to identify, manage, and eliminate occupational hazards is called for to prevent injuries and address concerns related to the vulnerability of this needed workforce.
{-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE Safe #-} {-# LANGUAGE Strict #-} {-# LANGUAGE TupleSections #-} {-# LANGUAGE TypeFamilies #-} -- | -- -- This module implements a transformation from source to core -- Futhark. module Futhark.Internalise (internaliseProg) where import Control.Monad.Reader import Data.Bitraversable import Data.List (find, intercalate, intersperse, nub, transpose) import qualified Data.List.NonEmpty as NE import qualified Data.Map.Strict as M import qualified Data.Set as S import Futhark.IR.SOACS as I hiding (stmPattern) import Futhark.Internalise.AccurateSizes import Futhark.Internalise.Bindings import Futhark.Internalise.Defunctionalise as Defunctionalise import Futhark.Internalise.Defunctorise as Defunctorise import Futhark.Internalise.Lambdas import Futhark.Internalise.Monad as I import Futhark.Internalise.Monomorphise as Monomorphise import Futhark.Internalise.TypesValues import Futhark.Transform.Rename as I import Futhark.Util (splitAt3) import Language.Futhark as E hiding (TypeArg) import Language.Futhark.Semantic (Imports) -- | Convert a program in source Futhark to a program in the Futhark -- core language. internaliseProg :: MonadFreshNames m => Bool -> Imports -> m (I.Prog SOACS) internaliseProg always_safe prog = do prog_decs <- Defunctorise.transformProg prog prog_decs' <- Monomorphise.transformProg prog_decs prog_decs'' <- Defunctionalise.transformProg prog_decs' (consts, funs) <- runInternaliseM always_safe (internaliseValBinds prog_decs'') I.renameProg $ I.Prog consts funs internaliseAttr :: E.AttrInfo -> Attr internaliseAttr (E.AttrAtom v) = I.AttrAtom v internaliseAttr (E.AttrComp f attrs) = I.AttrComp f $ map internaliseAttr attrs internaliseAttrs :: [E.AttrInfo] -> Attrs internaliseAttrs = mconcat . map (oneAttr . 
internaliseAttr) internaliseValBinds :: [E.ValBind] -> InternaliseM () internaliseValBinds = mapM_ internaliseValBind internaliseFunName :: VName -> [E.Pattern] -> InternaliseM Name internaliseFunName ofname [] = return $ nameFromString $ pretty ofname ++ "f" internaliseFunName ofname _ = do info <- lookupFunction' ofname -- In some rare cases involving local functions, the same function -- name may be re-used in multiple places. We check whether the -- function name has already been used, and generate a new one if -- so. case info of Just _ -> nameFromString . pretty <$> newNameFromString (baseString ofname) Nothing -> return $ nameFromString $ pretty ofname internaliseValBind :: E.ValBind -> InternaliseM () internaliseValBind fb@(E.ValBind entry fname retdecl (Info (rettype, _)) tparams params body _ attrs loc) = do localConstsScope $ bindingParams tparams params $ \shapeparams params' -> do let shapenames = map I.paramName shapeparams normal_params = shapenames ++ map I.paramName (concat params') normal_param_names = namesFromList normal_params fname' <- internaliseFunName fname params msg <- case retdecl of Just dt -> errorMsg . ("Function return value does not match shape of type " :) <$> typeExpForError dt Nothing -> return $ errorMsg ["Function return value does not match shape of declared return type."] ((rettype', body_res), body_stms) <- collectStms $ do body_res <- internaliseExp "res" body rettype_bad <- internaliseReturnType rettype let rettype' = zeroExts rettype_bad return (rettype', body_res) body' <- ensureResultExtShape msg loc (map I.fromDecl rettype') $ mkBody body_stms body_res constants <- allConsts let free_in_fun = freeIn body' `namesSubtract` normal_param_names `namesSubtract` constants used_free_params <- forM (namesToList free_in_fun) $ \v -> do v_t <- lookupType v return $ Param v $ toDecl v_t Nonunique let free_shape_params = map (`Param` I.Prim int32) $ concatMap (I.shapeVars . I.arrayShape . 
I.paramType) used_free_params free_params = nub $ free_shape_params ++ used_free_params all_params = free_params ++ shapeparams ++ concat params' let fd = I.FunDef Nothing (internaliseAttrs attrs) fname' rettype' all_params body' if null params' then bindConstant fname fd else bindFunction fname fd ( fname', map I.paramName free_params, shapenames, map declTypeOf $ concat params', all_params, applyRetType rettype' all_params ) case entry of Just (Info entry') -> generateEntryPoint entry' fb Nothing -> return () where zeroExts ts = generaliseExtTypes ts ts allDimsFreshInType :: MonadFreshNames m => E.PatternType -> m E.PatternType allDimsFreshInType = bitraverse onDim pure where onDim (E.NamedDim v) = E.NamedDim . E.qualName <$> newVName (baseString $ E.qualLeaf v) onDim _ = E.NamedDim . E.qualName <$> newVName "size" -- | Replace all named dimensions with a fresh name, and remove all -- constant dimensions. The point is to remove the constraints, but -- keep the names around. We use this for constructing the entry -- point parameters. 
allDimsFreshInPat :: MonadFreshNames m => E.Pattern -> m E.Pattern allDimsFreshInPat (PatternAscription p _ _) = allDimsFreshInPat p allDimsFreshInPat (PatternParens p _) = allDimsFreshInPat p allDimsFreshInPat (Id v (Info t) loc) = Id v <$> (Info <$> allDimsFreshInType t) <*> pure loc allDimsFreshInPat (TuplePattern ps loc) = TuplePattern <$> mapM allDimsFreshInPat ps <*> pure loc allDimsFreshInPat (RecordPattern ps loc) = RecordPattern <$> mapM (traverse allDimsFreshInPat) ps <*> pure loc allDimsFreshInPat (Wildcard (Info t) loc) = Wildcard <$> (Info <$> allDimsFreshInType t) <*> pure loc allDimsFreshInPat (PatternLit e (Info t) loc) = PatternLit e <$> (Info <$> allDimsFreshInType t) <*> pure loc allDimsFreshInPat (PatternConstr c (Info t) pats loc) = PatternConstr c <$> (Info <$> allDimsFreshInType t) <*> mapM allDimsFreshInPat pats <*> pure loc generateEntryPoint :: E.EntryPoint -> E.ValBind -> InternaliseM () generateEntryPoint (E.EntryPoint e_paramts e_rettype) vb = localConstsScope $ do let (E.ValBind _ ofname _ (Info (rettype, _)) _ params _ _ attrs loc) = vb -- We replace all shape annotations, so there should be no constant -- parameters here. params_fresh <- mapM allDimsFreshInPat params let tparams = map (`E.TypeParamDim` mempty) $ S.toList $ mconcat $ map E.patternDimNames params_fresh bindingParams tparams params_fresh $ \shapeparams params' -> do entry_rettype <- internaliseEntryReturnType $ anySizes rettype let entry' = entryPoint (zip e_paramts params') (e_rettype, entry_rettype) args = map (I.Var . I.paramName) $ concat params' entry_body <- insertStmsM $ do -- Special case the (rare) situation where the entry point is -- not a function. maybe_const <- lookupConst ofname vals <- case maybe_const of Just ses -> return ses Nothing -> fst <$> funcall "entry_result" (E.qualName ofname) args loc ctx <- extractShapeContext (concat entry_rettype) <$> mapM (fmap I.arrayDims . 
subExpType) vals resultBodyM (ctx ++ vals) addFunDef $ I.FunDef (Just entry') (internaliseAttrs attrs) (baseName ofname) (concat entry_rettype) (shapeparams ++ concat params') entry_body entryPoint :: [(E.EntryType, [I.FParam])] -> ( E.EntryType, [[I.TypeBase ExtShape Uniqueness]] ) -> I.EntryPoint entryPoint params (eret, crets) = ( concatMap (entryPointType . preParam) params, case ( isTupleRecord $ entryType eret, entryAscribed eret ) of (Just ts, Just (E.TETuple e_ts _)) -> concatMap entryPointType $ zip (zipWith E.EntryType ts (map Just e_ts)) crets (Just ts, Nothing) -> concatMap entryPointType $ zip (map (`E.EntryType` Nothing) ts) crets _ -> entryPointType (eret, concat crets) ) where preParam (e_t, ps) = (e_t, staticShapes $ map I.paramDeclType ps) entryPointType (t, ts) | E.Scalar (E.Prim E.Unsigned {}) <- E.entryType t = [I.TypeUnsigned] | E.Array _ _ (E.Prim E.Unsigned {}) _ <- E.entryType t = [I.TypeUnsigned] | E.Scalar E.Prim {} <- E.entryType t = [I.TypeDirect] | E.Array _ _ E.Prim {} _ <- E.entryType t = [I.TypeDirect] | otherwise = [I.TypeOpaque desc $ length ts] where desc = maybe (pretty t') typeExpOpaqueName $ E.entryAscribed t t' = noSizes (E.entryType t) `E.setUniqueness` Nonunique typeExpOpaqueName (TEApply te TypeArgExpDim {} _) = typeExpOpaqueName te typeExpOpaqueName (TEArray te _ _) = let (d, te') = withoutDims te in "arr_" ++ typeExpOpaqueName te' ++ "_" ++ show (1 + d) ++ "d" typeExpOpaqueName te = pretty te withoutDims (TEArray te _ _) = let (d, te') = withoutDims te in (d + 1, te') withoutDims te = (0 :: Int, te) internaliseIdent :: E.Ident -> InternaliseM I.VName internaliseIdent (E.Ident name (Info tp) loc) = case tp of E.Scalar E.Prim {} -> return name _ -> error $ "Futhark.Internalise.internaliseIdent: asked to internalise non-prim-typed ident '" ++ pretty name ++ " of type " ++ pretty tp ++ " at " ++ locStr loc ++ "." 
internaliseBody :: E.Exp -> InternaliseM Body internaliseBody e = insertStmsM $ resultBody <$> internaliseExp "res" e bodyFromStms :: InternaliseM (Result, a) -> InternaliseM (Body, a) bodyFromStms m = do ((res, a), stms) <- collectStms m (,a) <$> mkBodyM stms res internaliseExp :: String -> E.Exp -> InternaliseM [I.SubExp] internaliseExp desc (E.Parens e _) = internaliseExp desc e internaliseExp desc (E.QualParens _ e _) = internaliseExp desc e internaliseExp desc (E.StringLit vs _) = fmap pure $ letSubExp desc $ I.BasicOp $ I.ArrayLit (map constant vs) $ I.Prim int8 internaliseExp _ (E.Var (E.QualName _ name) (Info t) loc) = do subst <- lookupSubst name case subst of Just substs -> return substs Nothing -> do -- If this identifier is the name of a constant, we have to turn it -- into a call to the corresponding function. is_const <- lookupConst name case is_const of Just ses -> return ses Nothing -> (: []) . I.Var <$> internaliseIdent (E.Ident name (Info t) loc) internaliseExp desc (E.Index e idxs (Info ret, Info retext) loc) = do vs <- internaliseExpToVars "indexed" e dims <- case vs of [] -> return [] -- Will this happen? v : _ -> I.arrayDims <$> lookupType v (idxs', cs) <- internaliseSlice loc dims idxs let index v = do v_t <- lookupType v return $ I.BasicOp $ I.Index v $ fullSlice v_t idxs' ses <- certifying cs $ letSubExps desc =<< mapM index vs bindExtSizes (E.toStruct ret) retext ses return ses -- XXX: we map empty records and tuples to bools, because otherwise -- arrays of unit will lose their sizes. internaliseExp _ (E.TupLit [] _) = return [constant True] internaliseExp _ (E.RecordLit [] _) = return [constant True] internaliseExp desc (E.TupLit es _) = concat <$> mapM (internaliseExp desc) es internaliseExp desc (E.RecordLit orig_fields _) = concatMap snd . sortFields . 
M.unions <$> mapM internaliseField orig_fields where internaliseField (E.RecordFieldExplicit name e _) = M.singleton name <$> internaliseExp desc e internaliseField (E.RecordFieldImplicit name t loc) = internaliseField $ E.RecordFieldExplicit (baseName name) (E.Var (E.qualName name) t loc) loc internaliseExp desc (E.ArrayLit es (Info arr_t) loc) -- If this is a multidimensional array literal of primitives, we -- treat it specially by flattening it out followed by a reshape. -- This cuts down on the amount of statements that are produced, and -- thus allows us to efficiently handle huge array literals - a -- corner case, but an important one. | Just ((eshape, e') : es') <- mapM isArrayLiteral es, not $ null eshape, all ((eshape ==) . fst) es', Just basetype <- E.peelArray (length eshape) arr_t = do let flat_lit = E.ArrayLit (e' ++ concatMap snd es') (Info basetype) loc new_shape = length es : eshape flat_arrs <- internaliseExpToVars "flat_literal" flat_lit forM flat_arrs $ \flat_arr -> do flat_arr_t <- lookupType flat_arr let new_shape' = reshapeOuter (map (DimNew . intConst Int32 . toInteger) new_shape) 1 $ I.arrayShape flat_arr_t letSubExp desc $ I.BasicOp $ I.Reshape new_shape' flat_arr | otherwise = do es' <- mapM (internaliseExp "arr_elem") es arr_t_ext <- internaliseReturnType (E.toStruct arr_t) rowtypes <- case mapM (fmap rowType . hasStaticShape . I.fromDecl) arr_t_ext of Just ts -> pure ts Nothing -> -- XXX: the monomorphiser may create single-element array -- literals with an unknown row type. In those cases we -- need to look at the types of the actual elements. -- Fixing this in the monomorphiser is a lot more tricky -- than just working around it here. 
case es' of [] -> error $ "internaliseExp ArrayLit: existential type: " ++ pretty arr_t e' : _ -> mapM subExpType e' let arraylit ks rt = do ks' <- mapM ( ensureShape "shape of element differs from shape of first element" loc rt "elem_reshaped" ) ks return $ I.BasicOp $ I.ArrayLit ks' rt letSubExps desc =<< if null es' then mapM (arraylit []) rowtypes else zipWithM arraylit (transpose es') rowtypes where isArrayLiteral :: E.Exp -> Maybe ([Int], [E.Exp]) isArrayLiteral (E.ArrayLit inner_es _ _) = do (eshape, e) : inner_es' <- mapM isArrayLiteral inner_es guard $ all ((eshape ==) . fst) inner_es' return (length inner_es : eshape, e ++ concatMap snd inner_es') isArrayLiteral e = Just ([], [e]) internaliseExp desc (E.Range start maybe_second end (Info ret, Info retext) loc) = do start' <- internaliseExp1 "range_start" start end' <- internaliseExp1 "range_end" $ case end of DownToExclusive e -> e ToInclusive e -> e UpToExclusive e -> e maybe_second' <- traverse (internaliseExp1 "range_second") maybe_second -- Construct an error message in case the range is invalid. 
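-- As a worked sketch of the computation below (assuming a signed i32
-- range): for '1..3...10' the step is 3-1 = 2, the inclusive distance
-- is (10-1)+1 = 10, and num_elems = divUp 10 2 = 5, so the final Iota
-- produces [1, 3, 5, 7, 9].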
  let conv = case E.typeOf start of
        E.Scalar (E.Prim (E.Unsigned _)) -> asIntZ Int32
        _ -> asIntS Int32
  start'_i32 <- conv start'
  end'_i32 <- conv end'
  maybe_second'_i32 <- traverse conv maybe_second'

  let errmsg =
        errorMsg $
          ["Range "]
            ++ [ErrorInt32 start'_i32]
            ++ ( case maybe_second'_i32 of
                   Nothing -> []
                   Just second_i32 -> ["..", ErrorInt32 second_i32]
               )
            ++ ( case end of
                   DownToExclusive {} -> ["..>"]
                   ToInclusive {} -> ["..."]
                   UpToExclusive {} -> ["..<"]
               )
            ++ [ErrorInt32 end'_i32, " is invalid."]

  (it, le_op, lt_op) <-
    case E.typeOf start of
      E.Scalar (E.Prim (E.Signed it)) -> return (it, CmpSle it, CmpSlt it)
      E.Scalar (E.Prim (E.Unsigned it)) -> return (it, CmpUle it, CmpUlt it)
      start_t -> error $ "Start value in range has type " ++ pretty start_t

  let one = intConst it 1
      negone = intConst it (-1)
      default_step = case end of
        DownToExclusive {} -> negone
        ToInclusive {} -> one
        UpToExclusive {} -> one

  (step, step_zero) <- case maybe_second' of
    Just second' -> do
      subtracted_step <-
        letSubExp "subtracted_step" $
          I.BasicOp $ I.BinOp (I.Sub it I.OverflowWrap) second' start'
      step_zero <-
        letSubExp "step_zero" $
          I.BasicOp $ I.CmpOp (I.CmpEq $ IntType it) start' second'
      return (subtracted_step, step_zero)
    Nothing ->
      return (default_step, constant False)

  step_sign <- letSubExp "s_sign" $ BasicOp $ I.UnOp (I.SSignum it) step
  step_sign_i32 <- asIntS Int32 step_sign

  bounds_invalid_downwards <-
    letSubExp "bounds_invalid_downwards" $
      I.BasicOp $ I.CmpOp le_op start' end'
  bounds_invalid_upwards <-
    letSubExp "bounds_invalid_upwards" $
      I.BasicOp $ I.CmpOp lt_op end' start'

  (distance, step_wrong_dir, bounds_invalid) <- case end of
    DownToExclusive {} -> do
      step_wrong_dir <-
        letSubExp "step_wrong_dir" $
          I.BasicOp $ I.CmpOp (I.CmpEq $ IntType it) step_sign one
      distance <-
        letSubExp "distance" $
          I.BasicOp $ I.BinOp (Sub it I.OverflowWrap) start' end'
      distance_i32 <- asIntS Int32 distance
      return (distance_i32, step_wrong_dir, bounds_invalid_downwards)
    UpToExclusive {} -> do
      step_wrong_dir <-
letSubExp "step_wrong_dir" $ I.BasicOp $ I.CmpOp (I.CmpEq $ IntType it) step_sign negone distance <- letSubExp "distance" $ I.BasicOp $ I.BinOp (Sub it I.OverflowWrap) end' start' distance_i32 <- asIntS Int32 distance return (distance_i32, step_wrong_dir, bounds_invalid_upwards) ToInclusive {} -> do downwards <- letSubExp "downwards" $ I.BasicOp $ I.CmpOp (I.CmpEq $ IntType it) step_sign negone distance_downwards_exclusive <- letSubExp "distance_downwards_exclusive" $ I.BasicOp $ I.BinOp (Sub it I.OverflowWrap) start' end' distance_upwards_exclusive <- letSubExp "distance_upwards_exclusive" $ I.BasicOp $ I.BinOp (Sub it I.OverflowWrap) end' start' bounds_invalid <- letSubExp "bounds_invalid" $ I.If downwards (resultBody [bounds_invalid_downwards]) (resultBody [bounds_invalid_upwards]) $ ifCommon [I.Prim I.Bool] distance_exclusive <- letSubExp "distance_exclusive" $ I.If downwards (resultBody [distance_downwards_exclusive]) (resultBody [distance_upwards_exclusive]) $ ifCommon [I.Prim $ IntType it] distance_exclusive_i32 <- asIntS Int32 distance_exclusive distance <- letSubExp "distance" $ I.BasicOp $ I.BinOp (Add Int32 I.OverflowWrap) distance_exclusive_i32 (intConst Int32 1) return (distance, constant False, bounds_invalid) step_invalid <- letSubExp "step_invalid" $ I.BasicOp $ I.BinOp I.LogOr step_wrong_dir step_zero invalid <- letSubExp "range_invalid" $ I.BasicOp $ I.BinOp I.LogOr step_invalid bounds_invalid valid <- letSubExp "valid" $ I.BasicOp $ I.UnOp I.Not invalid cs <- assert "range_valid_c" valid errmsg loc step_i32 <- asIntS Int32 step pos_step <- letSubExp "pos_step" $ I.BasicOp $ I.BinOp (Mul Int32 I.OverflowWrap) step_i32 step_sign_i32 num_elems <- certifying cs $ letSubExp "num_elems" $ I.BasicOp $ I.BinOp (SDivUp Int32 I.Unsafe) distance pos_step se <- letSubExp desc (I.BasicOp $ I.Iota num_elems start' step it) bindExtSizes (E.toStruct ret) retext [se] return [se] internaliseExp desc (E.Ascript e _ _) = internaliseExp desc e internaliseExp desc 
  (E.Coerce e (TypeDecl dt (Info et)) (Info ret, Info retext) loc) = do
    ses <- internaliseExp desc e
    ts <- internaliseReturnType et
    dt' <- typeExpForError dt
    bindExtSizes (E.toStruct ret) retext ses
    forM (zip ses ts) $ \(e', t') -> do
      dims <- arrayDims <$> subExpType e'
      let parts =
            ["Value of (core language) shape ("]
              ++ intersperse ", " (map ErrorInt32 dims)
              ++ [") cannot match shape of type `"]
              ++ dt'
              ++ ["`."]
      ensureExtShape (errorMsg parts) loc (I.fromDecl t') desc e'
internaliseExp desc (E.Negate e _) = do
  e' <- internaliseExp1 "negate_arg" e
  et <- subExpType e'
  case et of
    I.Prim (I.IntType t) ->
      letTupExp' desc $
        I.BasicOp $ I.BinOp (I.Sub t I.OverflowWrap) (I.intConst t 0) e'
    I.Prim (I.FloatType t) ->
      letTupExp' desc $
        I.BasicOp $ I.BinOp (I.FSub t) (I.floatConst t 0) e'
    _ -> error "Futhark.Internalise.internaliseExp: non-numeric type in Negate"
internaliseExp desc e@E.Apply {} = do
  (qfname, args, ret, retext) <- findFuncall e

  -- Argument evaluation is outermost-in so that any existential sizes
  -- created by function applications can be brought into scope.
  let fname = nameFromString $ pretty $ baseName $ qualLeaf qfname
      loc = srclocOf e
      arg_desc = nameToString fname ++ "_arg"

  -- Some functions are magical (overloaded) and we handle that here.
  ses <- case () of
    -- Overloaded functions never take array arguments (except
    -- equality, but those cannot be existential), so we can safely
    -- ignore the existential dimensions.
    ()
      | Just internalise <- isOverloadedFunction qfname (map fst args) loc ->
        internalise desc
      | Just (rettype, _) <- M.lookup fname I.builtInFunctions -> do
        let tag ses = [(se, I.Observe) | se <- ses]
        args' <- reverse <$> mapM (internaliseArg arg_desc) (reverse args)
        let args'' = concatMap tag args'
        letTupExp' desc $ I.Apply fname args'' [I.Prim rettype] (Safe, loc, [])
      | otherwise -> do
        args' <-
          concat .
            reverse <$> mapM (internaliseArg arg_desc) (reverse args)
        fst <$> funcall desc qfname args' loc
  bindExtSizes ret retext ses
  return ses
internaliseExp desc (E.LetPat pat e body (Info ret, Info retext) _) = do
  ses <- internalisePat desc pat e body (internaliseExp desc)
  bindExtSizes (E.toStruct ret) retext ses
  return ses
internaliseExp
  desc
  (E.LetFun ofname (tparams, params, retdecl, Info rettype, body) letbody _ loc) = do
    internaliseValBind $
      E.ValBind Nothing ofname retdecl (Info (rettype, [])) tparams params body Nothing mempty loc
    internaliseExp desc letbody
internaliseExp desc (E.DoLoop sparams mergepat mergeexp form loopbody (Info (ret, retext)) loc) = do
  ses <- internaliseExp "loop_init" mergeexp
  ((loopbody', (form', shapepat, mergepat', mergeinit')), initstms) <-
    collectStms $ handleForm ses form
  addStms initstms
  mergeinit_ts' <- mapM subExpType mergeinit'
  ctxinit <- argShapes (map I.paramName shapepat) mergepat' mergeinit_ts'
  let ctxmerge = zip shapepat ctxinit
      valmerge = zip mergepat' mergeinit'
      dropCond = case form of
        E.While {} -> drop 1
        _ -> id
  -- Ensure that the result of the loop matches the shapes of the
  -- merge parameters.  XXX: Ideally they should already match (by
  -- the source language type rules), but some of our
  -- transformations (esp. defunctionalisation) strip out some size
  -- information.  For a type-correct source program, these reshapes
  -- should simplify away.
  let merge = ctxmerge ++ valmerge
      merge_ts = map (I.paramType . fst) merge
  loopbody'' <-
    localScope (scopeOfFParams $ map fst merge) $
      inScopeOf form' $
        insertStmsM $
          resultBodyM
            =<< ensureArgShapes
              "shape of loop result does not match shapes in loop parameter"
              loc
              (map (I.paramName . fst) ctxmerge)
              merge_ts
            =<< bodyBind loopbody'
  attrs <- asks envAttrs
  loop_res <-
    map I.Var .
      dropCond <$> attributing attrs (letTupExp desc (I.DoLoop ctxmerge valmerge form' loopbody''))
  bindExtSizes (E.toStruct ret) retext loop_res
  return loop_res
  where
    sparams' = map (`TypeParamDim` mempty) sparams

    forLoop mergepat' shapepat mergeinit form' =
      bodyFromStms $
        inScopeOf form' $ do
          ses <- internaliseExp "loopres" loopbody
          sets <- mapM subExpType ses
          shapeargs <- argShapes (map I.paramName shapepat) mergepat' sets
          return
            ( shapeargs ++ ses,
              ( form',
                shapepat,
                mergepat',
                mergeinit
              )
            )

    handleForm mergeinit (E.ForIn x arr) = do
      arr' <- internaliseExpToVars "for_in_arr" arr
      arr_ts <- mapM lookupType arr'
      let w = arraysSize 0 arr_ts
      i <- newVName "i"
      bindingLoopParams sparams' mergepat $ \shapepat mergepat' ->
        bindingLambdaParams [x] (map rowType arr_ts) $ \x_params -> do
          let loopvars = zip x_params arr'
          forLoop mergepat' shapepat mergeinit $ I.ForLoop i Int32 w loopvars
    handleForm mergeinit (E.For i num_iterations) = do
      num_iterations' <- internaliseExp1 "upper_bound" num_iterations
      i' <- internaliseIdent i
      num_iterations_t <- I.subExpType num_iterations'
      it <- case num_iterations_t of
        I.Prim (IntType it) -> return it
        _ -> error "internaliseExp DoLoop: invalid type"
      bindingLoopParams sparams' mergepat $ \shapepat mergepat' ->
        forLoop mergepat' shapepat mergeinit $ I.ForLoop i' it num_iterations' []
    handleForm mergeinit (E.While cond) =
      bindingLoopParams sparams' mergepat $ \shapepat mergepat' -> do
        mergeinit_ts <- mapM subExpType mergeinit
        -- We need to insert 'cond' twice - once for the initial
        -- condition (do we enter the loop at all?), and once with the
        -- result values of the loop (do we continue into the next
        -- iteration?).  This is safe, as the type rules for the
        -- external language guarantee that 'cond' does not consume
        -- anything.
shapeinit <- argShapes (map I.paramName shapepat) mergepat' mergeinit_ts (loop_initial_cond, init_loop_cond_bnds) <- collectStms $ do forM_ (zip shapepat shapeinit) $ \(p, se) -> letBindNames [paramName p] $ BasicOp $ SubExp se forM_ (zip mergepat' mergeinit) $ \(p, se) -> unless (se == I.Var (paramName p)) $ letBindNames [paramName p] $ BasicOp $ case se of I.Var v | not $ primType $ paramType p -> Reshape (map DimCoercion $ arrayDims $ paramType p) v _ -> SubExp se internaliseExp1 "loop_cond" cond addStms init_loop_cond_bnds bodyFromStms $ do ses <- internaliseExp "loopres" loopbody sets <- mapM subExpType ses loop_while <- newParam "loop_while" $ I.Prim I.Bool shapeargs <- argShapes (map I.paramName shapepat) mergepat' sets -- Careful not to clobber anything. loop_end_cond_body <- renameBody <=< insertStmsM $ do forM_ (zip shapepat shapeargs) $ \(p, se) -> unless (se == I.Var (paramName p)) $ letBindNames [paramName p] $ BasicOp $ SubExp se forM_ (zip mergepat' ses) $ \(p, se) -> unless (se == I.Var (paramName p)) $ letBindNames [paramName p] $ BasicOp $ case se of I.Var v | not $ primType $ paramType p -> Reshape (map DimCoercion $ arrayDims $ paramType p) v _ -> SubExp se resultBody <$> internaliseExp "loop_cond" cond loop_end_cond <- bodyBind loop_end_cond_body return ( shapeargs ++ loop_end_cond ++ ses, ( I.WhileLoop $ I.paramName loop_while, shapepat, loop_while : mergepat', loop_initial_cond : mergeinit ) ) internaliseExp desc (E.LetWith name src idxs ve body t loc) = do let pat = E.Id (E.identName name) (E.identType name) loc src_t = E.fromStruct <$> E.identType src e = E.Update (E.Var (E.qualName $ E.identName src) src_t loc) idxs ve loc internaliseExp desc $ E.LetPat pat e body (t, Info []) loc internaliseExp desc (E.Update src slice ve loc) = do ves <- internaliseExp "lw_val" ve srcs <- internaliseExpToVars "src" src dims <- case srcs of [] -> return [] -- Will this happen? 
v : _ -> I.arrayDims <$> lookupType v (idxs', cs) <- internaliseSlice loc dims slice let comb sname ve' = do sname_t <- lookupType sname let full_slice = fullSlice sname_t idxs' rowtype = sname_t `setArrayDims` sliceDims full_slice ve'' <- ensureShape "shape of value does not match shape of source array" loc rowtype "lw_val_correct_shape" ve' letInPlace desc sname full_slice $ BasicOp $ SubExp ve'' certifying cs $ map I.Var <$> zipWithM comb srcs ves internaliseExp desc (E.RecordUpdate src fields ve _ _) = do src' <- internaliseExp desc src ve' <- internaliseExp desc ve replace (E.typeOf src `setAliases` ()) fields ve' src' where replace (E.Scalar (E.Record m)) (f : fs) ve' src' | Just t <- M.lookup f m = do i <- fmap sum $ mapM (internalisedTypeSize . snd) $ takeWhile ((/= f) . fst) $ sortFields m k <- internalisedTypeSize t let (bef, to_update, aft) = splitAt3 i k src' src'' <- replace t fs ve' to_update return $ bef ++ src'' ++ aft replace _ _ ve' _ = return ve' internaliseExp desc (E.Attr attr e _) = local f $ internaliseExp desc e where attrs = oneAttr $ internaliseAttr attr f env | "unsafe" `inAttrs` attrs, not $ envSafe env = env {envDoBoundsChecks = False} | otherwise = env {envAttrs = envAttrs env <> attrs} internaliseExp desc (E.Assert e1 e2 (Info check) loc) = do e1' <- internaliseExp1 "assert_cond" e1 c <- assert "assert_c" e1' (errorMsg [ErrorString $ "Assertion is false: " <> check]) loc -- Make sure there are some bindings to certify. 
certifying c $ mapM rebind =<< internaliseExp desc e2 where rebind v = do v' <- newVName "assert_res" letBindNames [v'] $ I.BasicOp $ I.SubExp v return $ I.Var v' internaliseExp _ (E.Constr c es (Info (E.Scalar (E.Sum fs))) _) = do (ts, constr_map) <- internaliseSumType $ M.map (map E.toStruct) fs es' <- concat <$> mapM (internaliseExp "payload") es let noExt _ = return $ intConst Int32 0 ts' <- instantiateShapes noExt $ map fromDecl ts case M.lookup c constr_map of Just (i, js) -> (intConst Int8 (toInteger i) :) <$> clauses 0 ts' (zip js es') Nothing -> error "internaliseExp Constr: missing constructor" where clauses j (t : ts) js_to_es | Just e <- j `lookup` js_to_es = (e :) <$> clauses (j + 1) ts js_to_es | otherwise = do blank <- letSubExp "zero" =<< eBlank t (blank :) <$> clauses (j + 1) ts js_to_es clauses _ [] _ = return [] internaliseExp _ (E.Constr _ _ (Info t) loc) = error $ "internaliseExp: constructor with type " ++ pretty t ++ " at " ++ locStr loc internaliseExp desc (E.Match e cs (Info ret, Info retext) _) = do ses <- internaliseExp (desc ++ "_scrutinee") e res <- case NE.uncons cs of (CasePat pCase eCase _, Nothing) -> do (_, pertinent) <- generateCond pCase ses internalisePat' pCase pertinent eCase (internaliseExp desc) (c, Just cs') -> do let CasePat pLast eLast _ = NE.last cs' bFalse <- do (_, pertinent) <- generateCond pLast ses eLast' <- internalisePat' pLast pertinent eLast internaliseBody foldM (\bf c' -> eBody $ return $ generateCaseIf ses c' bf) eLast' $ reverse $ NE.init cs' letTupExp' desc =<< generateCaseIf ses c bFalse bindExtSizes (E.toStruct ret) retext res return res -- The "interesting" cases are over, now it's mostly boilerplate. 
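-- For reference, a sketch (not literal generated code) of how the
-- Match case above lowers: 'generateCond' turns each pattern into a
-- conjunction of equality tests on the internalised scrutinee, and
-- the cases chain into nested Ifs, e.g.
--
--   match x case #foo y -> e1
--           case _      -> e2
--
-- roughly becomes
--
--   let matches = x_tag == foo_tag
--   in if matches then e1[y := payload] else e2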
internaliseExp _ (E.Literal v _) = return [I.Constant $ internalisePrimValue v] internaliseExp _ (E.IntLit v (Info t) _) = case t of E.Scalar (E.Prim (E.Signed it)) -> return [I.Constant $ I.IntValue $ intValue it v] E.Scalar (E.Prim (E.Unsigned it)) -> return [I.Constant $ I.IntValue $ intValue it v] E.Scalar (E.Prim (E.FloatType ft)) -> return [I.Constant $ I.FloatValue $ floatValue ft v] _ -> error $ "internaliseExp: nonsensical type for integer literal: " ++ pretty t internaliseExp _ (E.FloatLit v (Info t) _) = case t of E.Scalar (E.Prim (E.FloatType ft)) -> return [I.Constant $ I.FloatValue $ floatValue ft v] _ -> error $ "internaliseExp: nonsensical type for float literal: " ++ pretty t internaliseExp desc (E.If ce te fe (Info ret, Info retext) _) = do ses <- letTupExp' desc =<< eIf (BasicOp . SubExp <$> internaliseExp1 "cond" ce) (internaliseBody te) (internaliseBody fe) bindExtSizes (E.toStruct ret) retext ses return ses -- Builtin operators are handled specially because they are -- overloaded. internaliseExp desc (E.BinOp (op, _) _ (xe, _) (ye, _) _ _ loc) | Just internalise <- isOverloadedFunction op [xe, ye] loc = internalise desc -- User-defined operators are just the same as a function call. internaliseExp desc ( E.BinOp (op, oploc) (Info t) (xarg, Info (xt, xext)) (yarg, Info (yt, yext)) _ (Info retext) loc ) = internaliseExp desc $ E.Apply ( E.Apply (E.Var op (Info t) oploc) xarg (Info (E.diet xt, xext)) (Info $ foldFunType [E.fromStruct yt] t, Info []) loc ) yarg (Info (E.diet yt, yext)) (Info t, Info retext) loc internaliseExp desc (E.Project k e (Info rt) _) = do n <- internalisedTypeSize $ rt `setAliases` () i' <- fmap sum $ mapM internalisedTypeSize $ case E.typeOf e `setAliases` () of E.Scalar (Record fs) -> map snd $ takeWhile ((/= k) . fst) $ sortFields fs t -> [t] take n . 
    drop i' <$> internaliseExp desc e
internaliseExp _ e@E.Lambda {} =
  error $ "internaliseExp: Unexpected lambda at " ++ locStr (srclocOf e)
internaliseExp _ e@E.OpSection {} =
  error $ "internaliseExp: Unexpected operator section at " ++ locStr (srclocOf e)
internaliseExp _ e@E.OpSectionLeft {} =
  error $ "internaliseExp: Unexpected left operator section at " ++ locStr (srclocOf e)
internaliseExp _ e@E.OpSectionRight {} =
  error $ "internaliseExp: Unexpected right operator section at " ++ locStr (srclocOf e)
internaliseExp _ e@E.ProjectSection {} =
  error $ "internaliseExp: Unexpected projection section at " ++ locStr (srclocOf e)
internaliseExp _ e@E.IndexSection {} =
  error $ "internaliseExp: Unexpected index section at " ++ locStr (srclocOf e)

internaliseArg :: String -> (E.Exp, Maybe VName) -> InternaliseM [SubExp]
internaliseArg desc (arg, argdim) = do
  arg' <- internaliseExp desc arg
  case (arg', argdim) of
    ([se], Just d) -> letBindNames [d] $ BasicOp $ SubExp se
    _ -> return ()
  return arg'

generateCond :: E.Pattern -> [I.SubExp] -> InternaliseM (I.SubExp, [I.SubExp])
generateCond orig_p orig_ses = do
  (cmps, pertinent, _) <- compares orig_p orig_ses
  cmp <- letSubExp "matches" =<< eAll cmps
  return (cmp, pertinent)
  where
    -- Literals are always primitive values.
    compares (E.PatternLit e _ _) (se : ses) = do
      e' <- internaliseExp1 "constant" e
      t' <- elemType <$> subExpType se
      cmp <- letSubExp "match_lit" $ I.BasicOp $ I.CmpOp (I.CmpEq t') e' se
      return ([cmp], [se], ses)
    compares (E.PatternConstr c (Info (E.Scalar (E.Sum fs))) pats _) (se : ses) = do
      (payload_ts, m) <- internaliseSumType $ M.map (map toStruct) fs
      case M.lookup c m of
        Just (i, payload_is) -> do
          let i' = intConst Int8 $ toInteger i
          let (payload_ses, ses') = splitAt (length payload_ts) ses
          cmp <- letSubExp "match_constr" $ I.BasicOp $ I.CmpOp (I.CmpEq int8) i' se
          (cmps, pertinent, _) <-
            comparesMany pats $ map (payload_ses !!)
payload_is return (cmp : cmps, pertinent, ses') Nothing -> error "generateCond: missing constructor" compares (E.PatternConstr _ (Info t) _ _) _ = error $ "generateCond: PatternConstr has nonsensical type: " ++ pretty t compares (E.Id _ t loc) ses = compares (E.Wildcard t loc) ses compares (E.Wildcard (Info t) _) ses = do n <- internalisedTypeSize $ E.toStruct t let (id_ses, rest_ses) = splitAt n ses return ([], id_ses, rest_ses) compares (E.PatternParens pat _) ses = compares pat ses compares (E.TuplePattern pats _) ses = comparesMany pats ses compares (E.RecordPattern fs _) ses = comparesMany (map snd $ E.sortFields $ M.fromList fs) ses compares (E.PatternAscription pat _ _) ses = compares pat ses compares pat [] = error $ "generateCond: No values left for pattern " ++ pretty pat comparesMany [] ses = return ([], [], ses) comparesMany (pat : pats) ses = do (cmps1, pertinent1, ses') <- compares pat ses (cmps2, pertinent2, ses'') <- comparesMany pats ses' return ( cmps1 <> cmps2, pertinent1 <> pertinent2, ses'' ) generateCaseIf :: [I.SubExp] -> Case -> I.Body -> InternaliseM I.Exp generateCaseIf ses (CasePat p eCase _) bFail = do (cond, pertinent) <- generateCond p ses eCase' <- internalisePat' p pertinent eCase internaliseBody eIf (eSubExp cond) (return eCase') (return bFail) internalisePat :: String -> E.Pattern -> E.Exp -> E.Exp -> (E.Exp -> InternaliseM a) -> InternaliseM a internalisePat desc p e body m = do ses <- internaliseExp desc' e internalisePat' p ses body m where desc' = case S.toList $ E.patternIdents p of [v] -> baseString $ E.identName v _ -> desc internalisePat' :: E.Pattern -> [I.SubExp] -> E.Exp -> (E.Exp -> InternaliseM a) -> InternaliseM a internalisePat' p ses body m = do ses_ts <- mapM subExpType ses stmPattern p ses_ts $ \pat_names -> do forM_ (zip pat_names ses) $ \(v, se) -> letBindNames [v] $ I.BasicOp $ I.SubExp se m body internaliseSlice :: SrcLoc -> [SubExp] -> [E.DimIndex] -> InternaliseM ([I.DimIndex SubExp], Certificates) 
internaliseSlice loc dims idxs = do (idxs', oks, parts) <- unzip3 <$> zipWithM internaliseDimIndex dims idxs ok <- letSubExp "index_ok" =<< eAll oks let msg = errorMsg $ ["Index ["] ++ intercalate [", "] parts ++ ["] out of bounds for array of shape ["] ++ intersperse "][" (map ErrorInt32 $ take (length idxs) dims) ++ ["]."] c <- assert "index_certs" ok msg loc return (idxs', c) internaliseDimIndex :: SubExp -> E.DimIndex -> InternaliseM (I.DimIndex SubExp, SubExp, [ErrorMsgPart SubExp]) internaliseDimIndex w (E.DimFix i) = do (i', _) <- internaliseDimExp "i" i let lowerBound = I.BasicOp $ I.CmpOp (I.CmpSle I.Int32) (I.constant (0 :: I.Int32)) i' upperBound = I.BasicOp $ I.CmpOp (I.CmpSlt I.Int32) i' w ok <- letSubExp "bounds_check" =<< eBinOp I.LogAnd (pure lowerBound) (pure upperBound) return (I.DimFix i', ok, [ErrorInt32 i']) -- Special-case an important common case that otherwise leads to horrible code. internaliseDimIndex w ( E.DimSlice Nothing Nothing (Just (E.Negate (E.IntLit 1 _ _) _)) ) = do w_minus_1 <- letSubExp "w_minus_1" $ BasicOp $ I.BinOp (Sub Int32 I.OverflowWrap) w one return ( I.DimSlice w_minus_1 w $ intConst Int32 (-1), constant True, mempty ) where one = constant (1 :: Int32) internaliseDimIndex w (E.DimSlice i j s) = do s' <- maybe (return one) (fmap fst . internaliseDimExp "s") s s_sign <- letSubExp "s_sign" $ BasicOp $ I.UnOp (I.SSignum Int32) s' backwards <- letSubExp "backwards" $ I.BasicOp $ I.CmpOp (I.CmpEq int32) s_sign negone w_minus_1 <- letSubExp "w_minus_1" $ BasicOp $ I.BinOp (Sub Int32 I.OverflowWrap) w one let i_def = letSubExp "i_def" $ I.If backwards (resultBody [w_minus_1]) (resultBody [zero]) $ ifCommon [I.Prim int32] j_def = letSubExp "j_def" $ I.If backwards (resultBody [negone]) (resultBody [w]) $ ifCommon [I.Prim int32] i' <- maybe i_def (fmap fst . internaliseDimExp "i") i j' <- maybe j_def (fmap fst . 
      internaliseDimExp "j") j
  j_m_i <-
    letSubExp "j_m_i" $
      BasicOp $ I.BinOp (Sub Int32 I.OverflowWrap) j' i'
  -- Something like a division-rounding-up, but accommodating negative
  -- operands.
  let divRounding x y =
        eBinOp
          (SQuot Int32 Unsafe)
          ( eBinOp
              (Add Int32 I.OverflowWrap)
              x
              (eBinOp (Sub Int32 I.OverflowWrap) y (eSignum $ toExp s'))
          )
          y
  n <- letSubExp "n" =<< divRounding (toExp j_m_i) (toExp s')

  -- Bounds checks depend on whether we are slicing forwards or
  -- backwards.  If forwards, we must check '0 <= i && i <= j'.  If
  -- backwards, '-1 <= j && j <= i'.  In both cases, we check '0 <=
  -- i+n*s && i+(n-1)*s < w'.  We only check if the slice is nonempty.
  empty_slice <- letSubExp "empty_slice" $ I.BasicOp $ I.CmpOp (CmpEq int32) n zero

  m <- letSubExp "m" $ I.BasicOp $ I.BinOp (Sub Int32 I.OverflowWrap) n one
  m_t_s <- letSubExp "m_t_s" $ I.BasicOp $ I.BinOp (Mul Int32 I.OverflowWrap) m s'
  i_p_m_t_s <- letSubExp "i_p_m_t_s" $ I.BasicOp $ I.BinOp (Add Int32 I.OverflowWrap) i' m_t_s
  zero_leq_i_p_m_t_s <-
    letSubExp "zero_leq_i_p_m_t_s" $
      I.BasicOp $ I.CmpOp (I.CmpSle Int32) zero i_p_m_t_s
  i_p_m_t_s_leq_w <-
    letSubExp "i_p_m_t_s_leq_w" $
      I.BasicOp $ I.CmpOp (I.CmpSle Int32) i_p_m_t_s w
  i_p_m_t_s_lth_w <-
    letSubExp "i_p_m_t_s_lth_w" $
      I.BasicOp $ I.CmpOp (I.CmpSlt Int32) i_p_m_t_s w

  zero_lte_i <- letSubExp "zero_lte_i" $ I.BasicOp $ I.CmpOp (I.CmpSle Int32) zero i'
  i_lte_j <- letSubExp "i_lte_j" $ I.BasicOp $ I.CmpOp (I.CmpSle Int32) i' j'
  forwards_ok <-
    letSubExp "forwards_ok"
      =<< eAll [zero_lte_i, zero_lte_i, i_lte_j, zero_leq_i_p_m_t_s, i_p_m_t_s_lth_w]

  negone_lte_j <- letSubExp "negone_lte_j" $ I.BasicOp $ I.CmpOp (I.CmpSle Int32) negone j'
  j_lte_i <- letSubExp "j_lte_i" $ I.BasicOp $ I.CmpOp (I.CmpSle Int32) j' i'
  backwards_ok <-
    letSubExp "backwards_ok"
      =<< eAll [negone_lte_j, negone_lte_j, j_lte_i, zero_leq_i_p_m_t_s, i_p_m_t_s_leq_w]

  slice_ok <-
    letSubExp "slice_ok" $
      I.If backwards (resultBody [backwards_ok]) (resultBody [forwards_ok]) $
        ifCommon [I.Prim I.Bool]
ok_or_empty <- letSubExp "ok_or_empty" $ I.BasicOp $ I.BinOp I.LogOr empty_slice slice_ok let parts = case (i, j, s) of (_, _, Just {}) -> [ maybe "" (const $ ErrorInt32 i') i, ":", maybe "" (const $ ErrorInt32 j') j, ":", ErrorInt32 s' ] (_, Just {}, _) -> [ maybe "" (const $ ErrorInt32 i') i, ":", ErrorInt32 j' ] ++ maybe mempty (const [":", ErrorInt32 s']) s (_, Nothing, Nothing) -> [ErrorInt32 i', ":"] return (I.DimSlice i' n s', ok_or_empty, parts) where zero = constant (0 :: Int32) negone = constant (-1 :: Int32) one = constant (1 :: Int32) internaliseScanOrReduce :: String -> String -> (SubExp -> I.Lambda -> [SubExp] -> [VName] -> InternaliseM (SOAC SOACS)) -> (E.Exp, E.Exp, E.Exp, SrcLoc) -> InternaliseM [SubExp] internaliseScanOrReduce desc what f (lam, ne, arr, loc) = do arrs <- internaliseExpToVars (what ++ "_arr") arr nes <- internaliseExp (what ++ "_ne") ne nes' <- forM (zip nes arrs) $ \(ne', arr') -> do rowtype <- I.stripArray 1 <$> lookupType arr' ensureShape "Row shape of input array does not match shape of neutral element" loc rowtype (what ++ "_ne_right_shape") ne' nests <- mapM I.subExpType nes' arrts <- mapM lookupType arrs lam' <- internaliseFoldLambda internaliseLambda lam nests arrts w <- arraysSize 0 <$> mapM lookupType arrs letTupExp' desc . I.Op =<< f w lam' nes' arrs internaliseHist :: String -> E.Exp -> E.Exp -> E.Exp -> E.Exp -> E.Exp -> E.Exp -> SrcLoc -> InternaliseM [SubExp] internaliseHist desc rf hist op ne buckets img loc = do rf' <- internaliseExp1 "hist_rf" rf ne' <- internaliseExp "hist_ne" ne hist' <- internaliseExpToVars "hist_hist" hist buckets' <- letExp "hist_buckets" . BasicOp . 
SubExp =<< internaliseExp1 "hist_buckets" buckets img' <- internaliseExpToVars "hist_img" img -- reshape neutral element to have same size as the destination array ne_shp <- forM (zip ne' hist') $ \(n, h) -> do rowtype <- I.stripArray 1 <$> lookupType h ensureShape "Row shape of destination array does not match shape of neutral element" loc rowtype "hist_ne_right_shape" n ne_ts <- mapM I.subExpType ne_shp his_ts <- mapM lookupType hist' op' <- internaliseFoldLambda internaliseLambda op ne_ts his_ts -- reshape return type of bucket function to have same size as neutral element -- (modulo the index) bucket_param <- newParam "bucket_p" $ I.Prim int32 img_params <- mapM (newParam "img_p" . rowType) =<< mapM lookupType img' let params = bucket_param : img_params rettype = I.Prim int32 : ne_ts body = mkBody mempty $ map (I.Var . paramName) params body' <- localScope (scopeOfLParams params) $ ensureResultShape "Row shape of value array does not match row shape of hist target" (srclocOf img) rettype body -- get sizes of histogram and image arrays w_hist <- arraysSize 0 <$> mapM lookupType hist' w_img <- arraysSize 0 <$> mapM lookupType img' -- Generate an assertion and reshapes to ensure that buckets' and -- img' are the same size. 
b_shape <- I.arrayShape <$> lookupType buckets' let b_w = shapeSize 0 b_shape cmp <- letSubExp "bucket_cmp" $ I.BasicOp $ I.CmpOp (I.CmpEq I.int32) b_w w_img c <- assert "bucket_cert" cmp "length of index and value array does not match" loc buckets'' <- certifying c $ letExp (baseString buckets') $ I.BasicOp $ I.Reshape (reshapeOuter [DimCoercion w_img] 1 b_shape) buckets' letTupExp' desc $ I.Op $ I.Hist w_img [HistOp w_hist rf' hist' ne_shp op'] (I.Lambda params body' rettype) $ buckets'' : img' internaliseStreamMap :: String -> StreamOrd -> E.Exp -> E.Exp -> InternaliseM [SubExp] internaliseStreamMap desc o lam arr = do arrs <- internaliseExpToVars "stream_input" arr lam' <- internaliseStreamMapLambda internaliseLambda lam $ map I.Var arrs w <- arraysSize 0 <$> mapM lookupType arrs let form = I.Parallel o Commutative (I.Lambda [] (mkBody mempty []) []) [] letTupExp' desc $ I.Op $ I.Stream w form lam' arrs internaliseStreamRed :: String -> StreamOrd -> Commutativity -> E.Exp -> E.Exp -> E.Exp -> InternaliseM [SubExp] internaliseStreamRed desc o comm lam0 lam arr = do arrs <- internaliseExpToVars "stream_input" arr rowts <- mapM (fmap I.rowType . lookupType) arrs (lam_params, lam_body) <- internaliseStreamLambda internaliseLambda lam rowts let (chunk_param, _, lam_val_params) = partitionChunkedFoldParameters 0 lam_params -- Synthesize neutral elements by applying the fold function -- to an empty chunk. 
letBindNames [I.paramName chunk_param] $ I.BasicOp $ I.SubExp $ constant (0 :: Int32) forM_ lam_val_params $ \p -> letBindNames [I.paramName p] $ I.BasicOp $ I.Scratch (I.elemType $ I.paramType p) $ I.arrayDims $ I.paramType p nes <- bodyBind =<< renameBody lam_body nes_ts <- mapM I.subExpType nes outsz <- arraysSize 0 <$> mapM lookupType arrs let acc_arr_tps = [I.arrayOf t (I.Shape [outsz]) NoUniqueness | t <- nes_ts] lam0' <- internaliseFoldLambda internaliseLambda lam0 nes_ts acc_arr_tps let lam0_acc_params = take (length nes) $ I.lambdaParams lam0' lam_acc_params <- forM lam0_acc_params $ \p -> do name <- newVName $ baseString $ I.paramName p return p {I.paramName = name} -- Make sure the chunk size parameter comes first. let lam_params' = chunk_param : lam_acc_params ++ lam_val_params body_with_lam0 <- ensureResultShape "shape of result does not match shape of initial value" (srclocOf lam0) nes_ts <=< insertStmsM $ localScope (scopeOfLParams lam_params') $ do lam_res <- bodyBind lam_body lam_res' <- ensureArgShapes "shape of chunk function result does not match shape of initial value" (srclocOf lam) [] (map I.typeOf $ I.lambdaParams lam0') lam_res new_lam_res <- eLambda lam0' $ map eSubExp $ map (I.Var . paramName) lam_acc_params ++ lam_res' return $ resultBody new_lam_res let form = I.Parallel o comm lam0' nes lam' = I.Lambda { lambdaParams = lam_params', lambdaBody = body_with_lam0, lambdaReturnType = nes_ts } w <- arraysSize 0 <$> mapM lookupType arrs letTupExp' desc $ I.Op $ I.Stream w form lam' arrs internaliseExp1 :: String -> E.Exp -> InternaliseM I.SubExp internaliseExp1 desc e = do vs <- internaliseExp desc e case vs of [se] -> return se _ -> error "Internalise.internaliseExp1: was passed not just a single subexpression" -- | Promote to dimension type as appropriate for the original type. -- Also return original type. 
internaliseDimExp :: String -> E.Exp -> InternaliseM (I.SubExp, IntType) internaliseDimExp s e = do e' <- internaliseExp1 s e case E.typeOf e of E.Scalar (E.Prim (Signed it)) -> (,it) <$> asIntS Int32 e' _ -> error "internaliseDimExp: bad type" internaliseExpToVars :: String -> E.Exp -> InternaliseM [I.VName] internaliseExpToVars desc e = mapM asIdent =<< internaliseExp desc e where asIdent (I.Var v) = return v asIdent se = letExp desc $ I.BasicOp $ I.SubExp se internaliseOperation :: String -> E.Exp -> (I.VName -> InternaliseM I.BasicOp) -> InternaliseM [I.SubExp] internaliseOperation s e op = do vs <- internaliseExpToVars s e letSubExps s =<< mapM (fmap I.BasicOp . op) vs certifyingNonzero :: SrcLoc -> IntType -> SubExp -> InternaliseM a -> InternaliseM a certifyingNonzero loc t x m = do zero <- letSubExp "zero" $ I.BasicOp $ CmpOp (CmpEq (IntType t)) x (intConst t 0) nonzero <- letSubExp "nonzero" $ I.BasicOp $ UnOp Not zero c <- assert "nonzero_cert" nonzero "division by zero" loc certifying c m certifyingNonnegative :: SrcLoc -> IntType -> SubExp -> InternaliseM a -> InternaliseM a certifyingNonnegative loc t x m = do nonnegative <- letSubExp "nonnegative" $ I.BasicOp $ CmpOp (CmpSle t) (intConst t 0) x c <- assert "nonzero_cert" nonnegative "negative exponent" loc certifying c m internaliseBinOp :: SrcLoc -> String -> E.BinOp -> I.SubExp -> I.SubExp -> E.PrimType -> E.PrimType -> InternaliseM [I.SubExp] internaliseBinOp _ desc E.Plus x y (E.Signed t) _ = simpleBinOp desc (I.Add t I.OverflowWrap) x y internaliseBinOp _ desc E.Plus x y (E.Unsigned t) _ = simpleBinOp desc (I.Add t I.OverflowWrap) x y internaliseBinOp _ desc E.Plus x y (E.FloatType t) _ = simpleBinOp desc (I.FAdd t) x y internaliseBinOp _ desc E.Minus x y (E.Signed t) _ = simpleBinOp desc (I.Sub t I.OverflowWrap) x y internaliseBinOp _ desc E.Minus x y (E.Unsigned t) _ = simpleBinOp desc (I.Sub t I.OverflowWrap) x y internaliseBinOp _ desc E.Minus x y (E.FloatType t) _ = simpleBinOp desc (I.FSub 
t) x y internaliseBinOp _ desc E.Times x y (E.Signed t) _ = simpleBinOp desc (I.Mul t I.OverflowWrap) x y internaliseBinOp _ desc E.Times x y (E.Unsigned t) _ = simpleBinOp desc (I.Mul t I.OverflowWrap) x y internaliseBinOp _ desc E.Times x y (E.FloatType t) _ = simpleBinOp desc (I.FMul t) x y internaliseBinOp loc desc E.Divide x y (E.Signed t) _ = certifyingNonzero loc t y $ simpleBinOp desc (I.SDiv t I.Unsafe) x y internaliseBinOp loc desc E.Divide x y (E.Unsigned t) _ = certifyingNonzero loc t y $ simpleBinOp desc (I.UDiv t I.Unsafe) x y internaliseBinOp _ desc E.Divide x y (E.FloatType t) _ = simpleBinOp desc (I.FDiv t) x y internaliseBinOp _ desc E.Pow x y (E.FloatType t) _ = simpleBinOp desc (I.FPow t) x y internaliseBinOp loc desc E.Pow x y (E.Signed t) _ = certifyingNonnegative loc t y $ simpleBinOp desc (I.Pow t) x y internaliseBinOp _ desc E.Pow x y (E.Unsigned t) _ = simpleBinOp desc (I.Pow t) x y internaliseBinOp loc desc E.Mod x y (E.Signed t) _ = certifyingNonzero loc t y $ simpleBinOp desc (I.SMod t I.Unsafe) x y internaliseBinOp loc desc E.Mod x y (E.Unsigned t) _ = certifyingNonzero loc t y $ simpleBinOp desc (I.UMod t I.Unsafe) x y internaliseBinOp _ desc E.Mod x y (E.FloatType t) _ = simpleBinOp desc (I.FMod t) x y internaliseBinOp loc desc E.Quot x y (E.Signed t) _ = certifyingNonzero loc t y $ simpleBinOp desc (I.SQuot t I.Unsafe) x y internaliseBinOp loc desc E.Quot x y (E.Unsigned t) _ = certifyingNonzero loc t y $ simpleBinOp desc (I.UDiv t I.Unsafe) x y internaliseBinOp loc desc E.Rem x y (E.Signed t) _ = certifyingNonzero loc t y $ simpleBinOp desc (I.SRem t I.Unsafe) x y internaliseBinOp loc desc E.Rem x y (E.Unsigned t) _ = certifyingNonzero loc t y $ simpleBinOp desc (I.UMod t I.Unsafe) x y internaliseBinOp _ desc E.ShiftR x y (E.Signed t) _ = simpleBinOp desc (I.AShr t) x y internaliseBinOp _ desc E.ShiftR x y (E.Unsigned t) _ = simpleBinOp desc (I.LShr t) x y internaliseBinOp _ desc E.ShiftL x y (E.Signed t) _ = simpleBinOp desc 
(I.Shl t) x y internaliseBinOp _ desc E.ShiftL x y (E.Unsigned t) _ = simpleBinOp desc (I.Shl t) x y internaliseBinOp _ desc E.Band x y (E.Signed t) _ = simpleBinOp desc (I.And t) x y internaliseBinOp _ desc E.Band x y (E.Unsigned t) _ = simpleBinOp desc (I.And t) x y internaliseBinOp _ desc E.Xor x y (E.Signed t) _ = simpleBinOp desc (I.Xor t) x y internaliseBinOp _ desc E.Xor x y (E.Unsigned t) _ = simpleBinOp desc (I.Xor t) x y internaliseBinOp _ desc E.Bor x y (E.Signed t) _ = simpleBinOp desc (I.Or t) x y internaliseBinOp _ desc E.Bor x y (E.Unsigned t) _ = simpleBinOp desc (I.Or t) x y internaliseBinOp _ desc E.Equal x y t _ = simpleCmpOp desc (I.CmpEq $ internalisePrimType t) x y internaliseBinOp _ desc E.NotEqual x y t _ = do eq <- letSubExp (desc ++ "true") $ I.BasicOp $ I.CmpOp (I.CmpEq $ internalisePrimType t) x y fmap pure $ letSubExp desc $ I.BasicOp $ I.UnOp I.Not eq internaliseBinOp _ desc E.Less x y (E.Signed t) _ = simpleCmpOp desc (I.CmpSlt t) x y internaliseBinOp _ desc E.Less x y (E.Unsigned t) _ = simpleCmpOp desc (I.CmpUlt t) x y internaliseBinOp _ desc E.Leq x y (E.Signed t) _ = simpleCmpOp desc (I.CmpSle t) x y internaliseBinOp _ desc E.Leq x y (E.Unsigned t) _ = simpleCmpOp desc (I.CmpUle t) x y internaliseBinOp _ desc E.Greater x y (E.Signed t) _ = simpleCmpOp desc (I.CmpSlt t) y x -- Note the swapped x and y internaliseBinOp _ desc E.Greater x y (E.Unsigned t) _ = simpleCmpOp desc (I.CmpUlt t) y x -- Note the swapped x and y internaliseBinOp _ desc E.Geq x y (E.Signed t) _ = simpleCmpOp desc (I.CmpSle t) y x -- Note the swapped x and y internaliseBinOp _ desc E.Geq x y (E.Unsigned t) _ = simpleCmpOp desc (I.CmpUle t) y x -- Note the swapped x and y internaliseBinOp _ desc E.Less x y (E.FloatType t) _ = simpleCmpOp desc (I.FCmpLt t) x y internaliseBinOp _ desc E.Leq x y (E.FloatType t) _ = simpleCmpOp desc (I.FCmpLe t) x y internaliseBinOp _ desc E.Greater x y (E.FloatType t) _ = simpleCmpOp desc (I.FCmpLt t) y x -- Note the swapped x and 
y internaliseBinOp _ desc E.Geq x y (E.FloatType t) _ = simpleCmpOp desc (I.FCmpLe t) y x -- Note the swapped x and y -- Relational operators for booleans. internaliseBinOp _ desc E.Less x y E.Bool _ = simpleCmpOp desc I.CmpLlt x y internaliseBinOp _ desc E.Leq x y E.Bool _ = simpleCmpOp desc I.CmpLle x y internaliseBinOp _ desc E.Greater x y E.Bool _ = simpleCmpOp desc I.CmpLlt y x -- Note the swapped x and y internaliseBinOp _ desc E.Geq x y E.Bool _ = simpleCmpOp desc I.CmpLle y x -- Note the swapped x and y internaliseBinOp _ _ op _ _ t1 t2 = error $ "Invalid binary operator " ++ pretty op ++ " with operand types " ++ pretty t1 ++ ", " ++ pretty t2 simpleBinOp :: String -> I.BinOp -> I.SubExp -> I.SubExp -> InternaliseM [I.SubExp] simpleBinOp desc bop x y = letTupExp' desc $ I.BasicOp $ I.BinOp bop x y simpleCmpOp :: String -> I.CmpOp -> I.SubExp -> I.SubExp -> InternaliseM [I.SubExp] simpleCmpOp desc op x y = letTupExp' desc $ I.BasicOp $ I.CmpOp op x y findFuncall :: E.Exp -> InternaliseM ( E.QualName VName, [(E.Exp, Maybe VName)], E.StructType, [VName] ) findFuncall (E.Var fname (Info t) _) = return (fname, [], E.toStruct t, []) findFuncall (E.Apply f arg (Info (_, argext)) (Info ret, Info retext) _) = do (fname, args, _, _) <- findFuncall f return (fname, args ++ [(arg, argext)], E.toStruct ret, retext) findFuncall e = error $ "Invalid function expression in application: " ++ pretty e internaliseLambda :: InternaliseLambda internaliseLambda (E.Parens e _) rowtypes = internaliseLambda e rowtypes internaliseLambda (E.Lambda params body _ (Info (_, rettype)) _) rowtypes = bindingLambdaParams params rowtypes $ \params' -> do body' <- internaliseBody body rettype' <- internaliseLambdaReturnType rettype return (params', body', rettype') internaliseLambda e _ = error $ "internaliseLambda: unexpected expression:\n" ++ pretty e -- | Some operators and functions are overloaded or otherwise special -- - we detect and treat them here. 
isOverloadedFunction :: E.QualName VName -> [E.Exp] -> SrcLoc -> Maybe (String -> InternaliseM [SubExp]) isOverloadedFunction qname args loc = do guard $ baseTag (qualLeaf qname) <= maxIntrinsicTag let handlers = [ handleSign, handleIntrinsicOps, handleOps, handleSOACs, handleRest ] msum [h args $ baseString $ qualLeaf qname | h <- handlers] where handleSign [x] "sign_i8" = Just $ toSigned I.Int8 x handleSign [x] "sign_i16" = Just $ toSigned I.Int16 x handleSign [x] "sign_i32" = Just $ toSigned I.Int32 x handleSign [x] "sign_i64" = Just $ toSigned I.Int64 x handleSign [x] "unsign_i8" = Just $ toUnsigned I.Int8 x handleSign [x] "unsign_i16" = Just $ toUnsigned I.Int16 x handleSign [x] "unsign_i32" = Just $ toUnsigned I.Int32 x handleSign [x] "unsign_i64" = Just $ toUnsigned I.Int64 x handleSign _ _ = Nothing handleIntrinsicOps [x] s | Just unop <- find ((== s) . pretty) allUnOps = Just $ \desc -> do x' <- internaliseExp1 "x" x fmap pure $ letSubExp desc $ I.BasicOp $ I.UnOp unop x' handleIntrinsicOps [TupLit [x, y] _] s | Just bop <- find ((== s) . pretty) allBinOps = Just $ \desc -> do x' <- internaliseExp1 "x" x y' <- internaliseExp1 "y" y fmap pure $ letSubExp desc $ I.BasicOp $ I.BinOp bop x' y' | Just cmp <- find ((== s) . pretty) allCmpOps = Just $ \desc -> do x' <- internaliseExp1 "x" x y' <- internaliseExp1 "y" y fmap pure $ letSubExp desc $ I.BasicOp $ I.CmpOp cmp x' y' handleIntrinsicOps [x] s | Just conv <- find ((== s) . pretty) allConvOps = Just $ \desc -> do x' <- internaliseExp1 "x" x fmap pure $ letSubExp desc $ I.BasicOp $ I.ConvOp conv x' handleIntrinsicOps _ _ = Nothing -- Short-circuiting operators are magical. 
handleOps [x, y] "&&" = Just $ \desc -> internaliseExp desc $ E.If x y (E.Literal (E.BoolValue False) mempty) (Info $ E.Scalar $ E.Prim E.Bool, Info []) mempty handleOps [x, y] "||" = Just $ \desc -> internaliseExp desc $ E.If x (E.Literal (E.BoolValue True) mempty) y (Info $ E.Scalar $ E.Prim E.Bool, Info []) mempty -- Handle equality and inequality specially, to treat the case of -- arrays. handleOps [xe, ye] op | Just cmp_f <- isEqlOp op = Just $ \desc -> do xe' <- internaliseExp "x" xe ye' <- internaliseExp "y" ye rs <- zipWithM (doComparison desc) xe' ye' cmp_f desc =<< letSubExp "eq" =<< eAll rs where isEqlOp "!=" = Just $ \desc eq -> letTupExp' desc $ I.BasicOp $ I.UnOp I.Not eq isEqlOp "==" = Just $ \_ eq -> return [eq] isEqlOp _ = Nothing doComparison desc x y = do x_t <- I.subExpType x y_t <- I.subExpType y case x_t of I.Prim t -> letSubExp desc $ I.BasicOp $ I.CmpOp (I.CmpEq t) x y _ -> do let x_dims = I.arrayDims x_t y_dims = I.arrayDims y_t dims_match <- forM (zip x_dims y_dims) $ \(x_dim, y_dim) -> letSubExp "dim_eq" $ I.BasicOp $ I.CmpOp (I.CmpEq int32) x_dim y_dim shapes_match <- letSubExp "shapes_match" =<< eAll dims_match compare_elems_body <- runBodyBinder $ do -- Flatten both x and y. x_num_elems <- letSubExp "x_num_elems" =<< foldBinOp (I.Mul Int32 I.OverflowUndef) (constant (1 :: Int32)) x_dims x' <- letExp "x" $ I.BasicOp $ I.SubExp x y' <- letExp "x" $ I.BasicOp $ I.SubExp y x_flat <- letExp "x_flat" $ I.BasicOp $ I.Reshape [I.DimNew x_num_elems] x' y_flat <- letExp "y_flat" $ I.BasicOp $ I.Reshape [I.DimNew x_num_elems] y' -- Compare the elements. cmp_lam <- cmpOpLambda $ I.CmpEq (elemType x_t) cmps <- letExp "cmps" $ I.Op $ I.Screma x_num_elems (I.mapSOAC cmp_lam) [x_flat, y_flat] -- Check that all were equal. 
and_lam <- binOpLambda I.LogAnd I.Bool reduce <- I.reduceSOAC [Reduce Commutative and_lam [constant True]] all_equal <- letSubExp "all_equal" $ I.Op $ I.Screma x_num_elems reduce [cmps] return $ resultBody [all_equal] letSubExp "arrays_equal" $ I.If shapes_match compare_elems_body (resultBody [constant False]) $ ifCommon [I.Prim I.Bool] handleOps [x, y] name | Just bop <- find ((name ==) . pretty) [minBound .. maxBound :: E.BinOp] = Just $ \desc -> do x' <- internaliseExp1 "x" x y' <- internaliseExp1 "y" y case (E.typeOf x, E.typeOf y) of (E.Scalar (E.Prim t1), E.Scalar (E.Prim t2)) -> internaliseBinOp loc desc bop x' y' t1 t2 _ -> error "Futhark.Internalise.internaliseExp: non-primitive type in BinOp." handleOps _ _ = Nothing handleSOACs [TupLit [lam, arr] _] "map" = Just $ \desc -> do arr' <- internaliseExpToVars "map_arr" arr lam' <- internaliseMapLambda internaliseLambda lam $ map I.Var arr' w <- arraysSize 0 <$> mapM lookupType arr' letTupExp' desc $ I.Op $ I.Screma w (I.mapSOAC lam') arr' handleSOACs [TupLit [k, lam, arr] _] "partition" = do k' <- fromIntegral <$> fromInt32 k Just $ \_desc -> do arrs <- internaliseExpToVars "partition_input" arr lam' <- internalisePartitionLambda internaliseLambda k' lam $ map I.Var arrs uncurry (++) <$> partitionWithSOACS k' lam' arrs where fromInt32 (Literal (SignedValue (Int32Value k')) _) = Just k' fromInt32 (IntLit k' (Info (E.Scalar (E.Prim (Signed Int32)))) _) = Just $ fromInteger k' fromInt32 _ = Nothing handleSOACs [TupLit [lam, ne, arr] _] "reduce" = Just $ \desc -> internaliseScanOrReduce desc "reduce" reduce (lam, ne, arr, loc) where reduce w red_lam nes arrs = I.Screma w <$> I.reduceSOAC [Reduce Noncommutative red_lam nes] <*> pure arrs handleSOACs [TupLit [lam, ne, arr] _] "reduce_comm" = Just $ \desc -> internaliseScanOrReduce desc "reduce" reduce (lam, ne, arr, loc) where reduce w red_lam nes arrs = I.Screma w <$> I.reduceSOAC [Reduce Commutative red_lam nes] <*> pure arrs handleSOACs [TupLit [lam, ne, arr] _] 
"scan" = Just $ \desc -> internaliseScanOrReduce desc "scan" reduce (lam, ne, arr, loc) where reduce w scan_lam nes arrs = I.Screma w <$> I.scanSOAC [Scan scan_lam nes] <*> pure arrs handleSOACs [TupLit [op, f, arr] _] "reduce_stream" = Just $ \desc -> internaliseStreamRed desc InOrder Noncommutative op f arr handleSOACs [TupLit [op, f, arr] _] "reduce_stream_per" = Just $ \desc -> internaliseStreamRed desc Disorder Commutative op f arr handleSOACs [TupLit [f, arr] _] "map_stream" = Just $ \desc -> internaliseStreamMap desc InOrder f arr handleSOACs [TupLit [f, arr] _] "map_stream_per" = Just $ \desc -> internaliseStreamMap desc Disorder f arr handleSOACs [TupLit [rf, dest, op, ne, buckets, img] _] "hist" = Just $ \desc -> internaliseHist desc rf dest op ne buckets img loc handleSOACs _ _ = Nothing handleRest [x] "!" = Just $ complementF x handleRest [x] "opaque" = Just $ \desc -> mapM (letSubExp desc . BasicOp . Opaque) =<< internaliseExp "opaque_arg" x handleRest [E.TupLit [a, si, v] _] "scatter" = Just $ scatterF a si v handleRest [E.TupLit [n, m, arr] _] "unflatten" = Just $ \desc -> do arrs <- internaliseExpToVars "unflatten_arr" arr n' <- internaliseExp1 "n" n m' <- internaliseExp1 "m" m -- The unflattened dimension needs to have the same number of elements -- as the original dimension. 
old_dim <- I.arraysSize 0 <$> mapM lookupType arrs dim_ok <- letSubExp "dim_ok" =<< eCmpOp (I.CmpEq I.int32) (eBinOp (I.Mul Int32 I.OverflowUndef) (eSubExp n') (eSubExp m')) (eSubExp old_dim) dim_ok_cert <- assert "dim_ok_cert" dim_ok "new shape has different number of elements than old shape" loc certifying dim_ok_cert $ forM arrs $ \arr' -> do arr_t <- lookupType arr' letSubExp desc $ I.BasicOp $ I.Reshape (reshapeOuter [DimNew n', DimNew m'] 1 $ I.arrayShape arr_t) arr' handleRest [arr] "flatten" = Just $ \desc -> do arrs <- internaliseExpToVars "flatten_arr" arr forM arrs $ \arr' -> do arr_t <- lookupType arr' let n = arraySize 0 arr_t m = arraySize 1 arr_t k <- letSubExp "flat_dim" $ I.BasicOp $ I.BinOp (Mul Int32 I.OverflowUndef) n m letSubExp desc $ I.BasicOp $ I.Reshape (reshapeOuter [DimNew k] 2 $ I.arrayShape arr_t) arr' handleRest [TupLit [x, y] _] "concat" = Just $ \desc -> do xs <- internaliseExpToVars "concat_x" x ys <- internaliseExpToVars "concat_y" y outer_size <- arraysSize 0 <$> mapM lookupType xs let sumdims xsize ysize = letSubExp "conc_tmp" $ I.BasicOp $ I.BinOp (I.Add I.Int32 I.OverflowUndef) xsize ysize ressize <- foldM sumdims outer_size =<< mapM (fmap (arraysSize 0) . mapM lookupType) [ys] let conc xarr yarr = I.BasicOp $ I.Concat 0 xarr [yarr] ressize letSubExps desc $ zipWith conc xs ys handleRest [TupLit [offset, e] _] "rotate" = Just $ \desc -> do offset' <- internaliseExp1 "rotation_offset" offset internaliseOperation desc e $ \v -> do r <- I.arrayRank <$> lookupType v let zero = intConst Int32 0 offsets = offset' : replicate (r -1) zero return $ I.Rotate offsets v handleRest [e] "transpose" = Just $ \desc -> internaliseOperation desc e $ \v -> do r <- I.arrayRank <$> lookupType v return $ I.Rearrange ([1, 0] ++ [2 .. 
r -1]) v handleRest [TupLit [x, y] _] "zip" = Just $ \desc -> (++) <$> internaliseExp (desc ++ "_zip_x") x <*> internaliseExp (desc ++ "_zip_y") y handleRest [x] "unzip" = Just $ flip internaliseExp x handleRest [x] "trace" = Just $ flip internaliseExp x handleRest [x] "break" = Just $ flip internaliseExp x handleRest _ _ = Nothing toSigned int_to e desc = do e' <- internaliseExp1 "trunc_arg" e case E.typeOf e of E.Scalar (E.Prim E.Bool) -> letTupExp' desc $ I.If e' (resultBody [intConst int_to 1]) (resultBody [intConst int_to 0]) $ ifCommon [I.Prim $ I.IntType int_to] E.Scalar (E.Prim (E.Signed int_from)) -> letTupExp' desc $ I.BasicOp $ I.ConvOp (I.SExt int_from int_to) e' E.Scalar (E.Prim (E.Unsigned int_from)) -> letTupExp' desc $ I.BasicOp $ I.ConvOp (I.ZExt int_from int_to) e' E.Scalar (E.Prim (E.FloatType float_from)) -> letTupExp' desc $ I.BasicOp $ I.ConvOp (I.FPToSI float_from int_to) e' _ -> error "Futhark.Internalise: non-numeric type in ToSigned" toUnsigned int_to e desc = do e' <- internaliseExp1 "trunc_arg" e case E.typeOf e of E.Scalar (E.Prim E.Bool) -> letTupExp' desc $ I.If e' (resultBody [intConst int_to 1]) (resultBody [intConst int_to 0]) $ ifCommon [I.Prim $ I.IntType int_to] E.Scalar (E.Prim (E.Signed int_from)) -> letTupExp' desc $ I.BasicOp $ I.ConvOp (I.ZExt int_from int_to) e' E.Scalar (E.Prim (E.Unsigned int_from)) -> letTupExp' desc $ I.BasicOp $ I.ConvOp (I.ZExt int_from int_to) e' E.Scalar (E.Prim (E.FloatType float_from)) -> letTupExp' desc $ I.BasicOp $ I.ConvOp (I.FPToUI float_from int_to) e' _ -> error "Futhark.Internalise.internaliseExp: non-numeric type in ToUnsigned" complementF e desc = do e' <- internaliseExp1 "complement_arg" e et <- subExpType e' case et of I.Prim (I.IntType t) -> letTupExp' desc $ I.BasicOp $ I.UnOp (I.Complement t) e' I.Prim I.Bool -> letTupExp' desc $ I.BasicOp $ I.UnOp I.Not e' _ -> error "Futhark.Internalise.internaliseExp: non-int/bool type in Complement" scatterF a si v desc = do si' <- letExp 
"write_si" . BasicOp . SubExp =<< internaliseExp1 "write_arg_i" si svs <- internaliseExpToVars "write_arg_v" v sas <- internaliseExpToVars "write_arg_a" a si_shape <- I.arrayShape <$> lookupType si' let si_w = shapeSize 0 si_shape sv_ts <- mapM lookupType svs svs' <- forM (zip svs sv_ts) $ \(sv, sv_t) -> do let sv_shape = I.arrayShape sv_t sv_w = arraySize 0 sv_t -- Generate an assertion and reshapes to ensure that sv and si' are the same -- size. cmp <- letSubExp "write_cmp" $ I.BasicOp $ I.CmpOp (I.CmpEq I.int32) si_w sv_w c <- assert "write_cert" cmp "length of index and value array does not match" loc certifying c $ letExp (baseString sv ++ "_write_sv") $ I.BasicOp $ I.Reshape (reshapeOuter [DimCoercion si_w] 1 sv_shape) sv indexType <- rowType <$> lookupType si' indexName <- newVName "write_index" valueNames <- replicateM (length sv_ts) $ newVName "write_value" sa_ts <- mapM lookupType sas let bodyTypes = replicate (length sv_ts) indexType ++ map rowType sa_ts paramTypes = indexType : map rowType sv_ts bodyNames = indexName : valueNames bodyParams = zipWith I.Param bodyNames paramTypes -- This body is pretty boring right now, as every input is exactly the output. -- But it can get funky later on if fused with something else. 
body <- localScope (scopeOfLParams bodyParams) $ insertStmsM $ do let outs = replicate (length valueNames) indexName ++ valueNames results <- forM outs $ \name -> letSubExp "write_res" $ I.BasicOp $ I.SubExp $ I.Var name ensureResultShape "scatter value has wrong size" loc bodyTypes $ resultBody results let lam = I.Lambda { I.lambdaParams = bodyParams, I.lambdaReturnType = bodyTypes, I.lambdaBody = body } sivs = si' : svs' let sa_ws = map (arraySize 0) sa_ts letTupExp' desc $ I.Op $ I.Scatter si_w lam sivs $ zip3 sa_ws (repeat 1) sas funcall :: String -> QualName VName -> [SubExp] -> SrcLoc -> InternaliseM ([SubExp], [I.ExtType]) funcall desc (QualName _ fname) args loc = do (fname', closure, shapes, value_paramts, fun_params, rettype_fun) <- lookupFunction fname argts <- mapM subExpType args shapeargs <- argShapes shapes fun_params argts let diets = replicate (length closure + length shapeargs) I.ObservePrim ++ map I.diet value_paramts args' <- ensureArgShapes "function arguments of wrong shape" loc (map I.paramName fun_params) (map I.paramType fun_params) (map I.Var closure ++ shapeargs ++ args) argts' <- mapM subExpType args' case rettype_fun $ zip args' argts' of Nothing -> error $ "Cannot apply " ++ pretty fname ++ " to arguments\n " ++ pretty args' ++ "\nof types\n " ++ pretty argts' ++ "\nFunction has parameters\n " ++ pretty fun_params Just ts -> do safety <- askSafety attrs <- asks envAttrs ses <- attributing attrs $ letTupExp' desc $ I.Apply fname' (zip args' diets) ts (safety, loc, mempty) return (ses, map I.fromDecl ts) -- Bind existential names defined by an expression, based on the -- concrete values that expression evaluated to. This most -- importantly should be done after function calls, but also -- everything else that can produce existentials in the source -- language. 
bindExtSizes :: E.StructType -> [VName] -> [SubExp] -> InternaliseM () bindExtSizes ret retext ses = do ts <- internaliseType ret ses_ts <- mapM subExpType ses let combine t1 t2 = mconcat $ zipWith combine' (arrayExtDims t1) (arrayDims t2) combine' (I.Free (I.Var v)) se | v `elem` retext = M.singleton v se combine' _ _ = mempty forM_ (M.toList $ mconcat $ zipWith combine ts ses_ts) $ \(v, se) -> letBindNames [v] $ BasicOp $ SubExp se askSafety :: InternaliseM Safety askSafety = do check <- asks envDoBoundsChecks return $ if check then I.Safe else I.Unsafe -- Implement partitioning using maps, scans and writes. partitionWithSOACS :: Int -> I.Lambda -> [I.VName] -> InternaliseM ([I.SubExp], [I.SubExp]) partitionWithSOACS k lam arrs = do arr_ts <- mapM lookupType arrs let w = arraysSize 0 arr_ts classes_and_increments <- letTupExp "increments" $ I.Op $ I.Screma w (mapSOAC lam) arrs (classes, increments) <- case classes_and_increments of classes : increments -> return (classes, take k increments) _ -> error "partitionWithSOACS" add_lam_x_params <- replicateM k $ I.Param <$> newVName "x" <*> pure (I.Prim int32) add_lam_y_params <- replicateM k $ I.Param <$> newVName "y" <*> pure (I.Prim int32) add_lam_body <- runBodyBinder $ localScope (scopeOfLParams $ add_lam_x_params ++ add_lam_y_params) $ fmap resultBody $ forM (zip add_lam_x_params add_lam_y_params) $ \(x, y) -> letSubExp "z" $ I.BasicOp $ I.BinOp (I.Add Int32 I.OverflowUndef) (I.Var $ I.paramName x) (I.Var $ I.paramName y) let add_lam = I.Lambda { I.lambdaBody = add_lam_body, I.lambdaParams = add_lam_x_params ++ add_lam_y_params, I.lambdaReturnType = replicate k $ I.Prim int32 } nes = replicate (length increments) $ constant (0 :: Int32) scan <- I.scanSOAC [I.Scan add_lam nes] all_offsets <- letTupExp "offsets" $ I.Op $ I.Screma w scan increments -- We have the offsets for each of the partitions, but we also need -- the total sizes, which are the last elements in the offests. 
We -- just have to be careful in case the array is empty. last_index <- letSubExp "last_index" $ I.BasicOp $ I.BinOp (I.Sub Int32 OverflowUndef) w $ constant (1 :: Int32) nonempty_body <- runBodyBinder $ fmap resultBody $ forM all_offsets $ \offset_array -> letSubExp "last_offset" $ I.BasicOp $ I.Index offset_array [I.DimFix last_index] let empty_body = resultBody $ replicate k $ constant (0 :: Int32) is_empty <- letSubExp "is_empty" $ I.BasicOp $ I.CmpOp (CmpEq int32) w $ constant (0 :: Int32) sizes <- letTupExp "partition_size" $ I.If is_empty empty_body nonempty_body $ ifCommon $ replicate k $ I.Prim int32 -- The total size of all partitions must necessarily be equal to the -- size of the input array. -- Create scratch arrays for the result. blanks <- forM arr_ts $ \arr_t -> letExp "partition_dest" $ I.BasicOp $ Scratch (elemType arr_t) (w : drop 1 (I.arrayDims arr_t)) -- Now write into the result. write_lam <- do c_param <- I.Param <$> newVName "c" <*> pure (I.Prim int32) offset_params <- replicateM k $ I.Param <$> newVName "offset" <*> pure (I.Prim int32) value_params <- forM arr_ts $ \arr_t -> I.Param <$> newVName "v" <*> pure (I.rowType arr_t) (offset, offset_stms) <- collectStms $ mkOffsetLambdaBody (map I.Var sizes) (I.Var $ I.paramName c_param) 0 offset_params return I.Lambda { I.lambdaParams = c_param : offset_params ++ value_params, I.lambdaReturnType = replicate (length arr_ts) (I.Prim int32) ++ map I.rowType arr_ts, I.lambdaBody = mkBody offset_stms $ replicate (length arr_ts) offset ++ map (I.Var . 
I.paramName) value_params } results <- letTupExp "partition_res" $ I.Op $ I.Scatter w write_lam (classes : all_offsets ++ arrs) $ zip3 (repeat w) (repeat 1) blanks sizes' <- letSubExp "partition_sizes" $ I.BasicOp $ I.ArrayLit (map I.Var sizes) $ I.Prim int32 return (map I.Var results, [sizes']) where mkOffsetLambdaBody :: [SubExp] -> SubExp -> Int -> [I.LParam] -> InternaliseM SubExp mkOffsetLambdaBody _ _ _ [] = return $ constant (-1 :: Int32) mkOffsetLambdaBody sizes c i (p : ps) = do is_this_one <- letSubExp "is_this_one" $ I.BasicOp $ I.CmpOp (CmpEq int32) c $ intConst Int32 $ toInteger i next_one <- mkOffsetLambdaBody sizes c (i + 1) ps this_one <- letSubExp "this_offset" =<< foldBinOp (Add Int32 OverflowUndef) (constant (-1 :: Int32)) (I.Var (I.paramName p) : take i sizes) letSubExp "total_res" $ I.If is_this_one (resultBody [this_one]) (resultBody [next_one]) $ ifCommon [I.Prim int32] typeExpForError :: E.TypeExp VName -> InternaliseM [ErrorMsgPart SubExp] typeExpForError (E.TEVar qn _) = return [ErrorString $ pretty qn] typeExpForError (E.TEUnique te _) = ("*" :) <$> typeExpForError te typeExpForError (E.TEArray te d _) = do d' <- dimExpForError d te' <- typeExpForError te return $ ["[", d', "]"] ++ te' typeExpForError (E.TETuple tes _) = do tes' <- mapM typeExpForError tes return $ ["("] ++ intercalate [", "] tes' ++ [")"] typeExpForError (E.TERecord fields _) = do fields' <- mapM onField fields return $ ["{"] ++ intercalate [", "] fields' ++ ["}"] where onField (k, te) = (ErrorString (pretty k ++ ": ") :) <$> typeExpForError te typeExpForError (E.TEArrow _ t1 t2 _) = do t1' <- typeExpForError t1 t2' <- typeExpForError t2 return $ t1' ++ [" -> "] ++ t2' typeExpForError (E.TEApply t arg _) = do t' <- typeExpForError t arg' <- case arg of TypeArgExpType argt -> typeExpForError argt TypeArgExpDim d _ -> pure <$> dimExpForError d return $ t' ++ [" "] ++ arg' typeExpForError (E.TESum cs _) = do cs' <- mapM (onClause . 
snd) cs return $ intercalate [" | "] cs' where onClause c = do c' <- mapM typeExpForError c return $ intercalate [" "] c' dimExpForError :: E.DimExp VName -> InternaliseM (ErrorMsgPart SubExp) dimExpForError (DimExpNamed d _) = do substs <- lookupSubst $ E.qualLeaf d d' <- case substs of Just [v] -> return v _ -> return $ I.Var $ E.qualLeaf d return $ ErrorInt32 d' dimExpForError (DimExpConst d _) = return $ ErrorString $ pretty d dimExpForError DimExpAny = return "" -- A smart constructor that compacts neighbouring literals for easier -- reading in the IR. errorMsg :: [ErrorMsgPart a] -> ErrorMsg a errorMsg = ErrorMsg . compact where compact [] = [] compact (ErrorString x : ErrorString y : parts) = compact (ErrorString (x ++ y) : parts) compact (x : y) = x : compact y
A new book has been published on Hindi movie music, called "Gaata Rahe Mera Dil". It is written by Anirudha Bhattacharjee and Balaji Vittal, who had earlier written a book called "R D Burman - The Man and The Music". The book discusses 50 classic songs from the history of Hindi movies. The oldest song is from "Street Singer" (1937), sung by K L Saigal. The latest is "Ae ajnabi tu bhi kabhi" (Dil Se). The 300-page book, published by Harper Collins, discusses the history and other information behind the making of the songs. The book is getting good reviews. Unfortunately it is not available in small towns (I am a small-town person living in a small town). From the review of this book, as well as from what I read in the earlier book on R D Burman, I find that the "technical" details that the authors discuss about the songs are nothing but bullcrap. Here is what the authors say about the "Dosti" (1964) song "Mera toh jo bhi kadam hai": "The composers use two Komal notes—Dha and Ni—in the mukhra where all the other five shudh notes give the song a major-scale colour. The antara, with the emphasis shifting to a Komal Re, creates the transitory feel of a change in scale. The use of Komal notes—the Ga and Ni—creates an aura of unconventionality and underlines the desolate cry of grief…" I feel that the above "technical" discussion makes no sense whatsoever. I request SL, our technical guru, to tell us what he thinks of the above comment on the song. SL, if you can get hold of this book, please give us your review of it.

Will do. First, I am unable to recollect this song; I will have to hear it. The book I will have to order over Amazon, I guess, given I am in a smaller town than the one you are in. I certainly will do the prelims as far as this song is concerned. The extract you have given is itself sort of iffy. My preliminary thoughts: first off, this song is in raag Yaman Kalyan.
The mood for this raga is "serene" and "haunting" (depending on the rendition), but I would not necessarily call it grief. A more appropriate word would be feeling lost, or a feeling of longing, but not grief. The "haunting" nature comes from the mM (yes, mM and not M) rather than anything else, which makes it Yaman Kalyan instead of pure Yaman (which would have an "M" instead). Compare that to "Koi jab tumhara hriday tod de, tadapta hua jab koi chod de" (Kalyan), which is again 'haunting', not sad. I think the author intended to use the term melancholic instead of "sad" as in tragic, or rondu as we would say in Hindi. (Because of my leg, I cannot really play my KB right now (the position of the foot pedals plus the distance where the KB will have to be, because of my leg and my back posture, etc.), and so I cannot really "judge" by ear alone the impact of the komal Dha and Ni as the author suggests. So I will not say they are wrong, but my preliminary thoughts suggest the mM being more emphatic than anything else.) m = pure Ma, M = dirgha/tivra Ma (Ma does not have a komal). (Note: modern musicians consider Yaman, Kalyan (Carnatic equivalent: Kalyani) and Yaman Kalyan one and the same, but traditionalists consider them three distinct ragas.) Also, given it is Yaman Kalyan (I am 100% certain), there is no room for any kind of komal Dha or Ni there!! Where the author heard that is a big question for me. The raga has all shuddha swars except for the tivra Ma! BTW, Kalyan can be romantic, and very much so at that: "Yaad rahega pyaar ka ye rangeen zamaana yaad rahega". And songs like "Tasvir Banaata Hun, Tasvir Nahin Banati" (in raag Pahadi) are extremely haunting. So it is more the lyrics plus the progression, rather than the scale, that gives the "feel", even though the notes have about a 50% contribution. In some cases, the same raga can give the complete opposite feel, e.g. "Nache Man Mora Magan Dhig Dha Dhigi Dhigi" (Bhairavi) vs "Mujhko is raat ki tanhai mein aawaaz na do..."
or "Tera jaana dil ke armanon ka" (both Bhairavi again). (I know Raja Pai would have my pants off if not for this disclaimer!! He knows his music stuff.)

Thanks for your prompt impromptu technical review. And what about the authors going technobabble, viz. "... give the song a major-scale colour"? There is no concept of a major scale or its equivalent in Hindustani classical music; we have the "raag" in Hindustani classical music instead. And also "... creates the transitory feel of a change in scale." Do they mean anything whatsoever, or are they just supposed to impress the ignorant?

Honestly, I have never come across such jargon ever. "Transitory nature" basically means nothing; there is no such thing. You change scales as in a ragamalika, but what exactly the transitory nature of a scale is, is foreign to me. And another thing: the major and minor "nature" of a scale is determined by its aroha-avaroha progression (pakad) and not by gamaks at all. I am not sure I understand that piece of gobbledygook at all. And another point: a major scale usually signifies (in Western jargon) an upbeat song, like "lungi dance, lungi dance", the one that gets your blood pumping, and a minor scale is more serene, romantic and all those soft feels ("Hotel California"). So how a minor scale can impart a major-scale colour is beyond my technical know-how. I edited a few posts above for accuracy (I type as thoughts come to me). A change of raga, or a mix of them, is a ragamalika. But a shift of scales does not change the raga or the feel. For instance, in "Jai Ho", in the ending crescendo the scale shifts up by half a note, but the ENTIRE progression shifts up. It only adds the "fervour" part, or emphasis, to the feel, like going from a fan to a die-hard fan. The same is there in the MJ number "Man in the Mirror" in the ending stages. One can argue that it might shift moods, like in the case of "pyaar hamen kis mod pe le aaya ke dil kare hai, koi ye bataye kyaa hoga", when it becomes a fast dance number in the end..
but notice it starts out as a COMICAL/frivolous/chewtiyaagiri song at the outset (only pretending to be a sad song).. and becomes ultra comic by the end (movie satte pe satta). The authors were expecting that people would not question all this technobabble but rather get "impressed" by it. I suspect that they have come up with similar "technical" analysis in case of all the 50 songs that they have discussed in the book. That should make it a very interesting read, though not in the manner that the authors intended. peaceful wrote: Can you teach me how to download only one song from YouTube? Because when I post a certain song, others already open on the right hand side. I want to learn more. I heard you are my technical gurus. Yes SC, it would be fun to see their treatment of rather esoteric ragas like bilaval, asavari etc.. those are dynamic, and the most dynamic (strictly in my opinion) is malkauns. I think they will use rather quixotic phrases.. like blanket ones to suggest something very exotic but then would not really be saying anything... I love such highly educational pieces. A review of this book was posted in a blog by someone who is my facebook friend as well as the facebook friend of the author of this book. I have questioned the "technical" analysis of this song as mentioned in the review. The author refused to accept that the song is in Yaman Kalyan. Please have a look at the vehement denial by the author in his comments in this post: Well then what raga is it based on? I am willing to be proven wrong but with my limited and rusty knowledge, I am almost willing to bet a bit that it is YAMAN KALYAN. What else is that? I cannot think of any other raga that it may belong to. Also the argument "Tonic does not change".. hmm.. what does he mean by that? Modulation or something else? See, ragas are based on notes, and komal, shuddh and teevra are all notes in their own right. 
Changing from the shuddh to komal or dirgh usually will NOT change the tonic (and note, tonic can be loosely termed as vadi in hindustani).. modulation is a tonic shift, and that also is not quite the same as a raga jump... instances of that happen all the time with NO shift in raga at all (abrupt in kind) (Mozart's K160). But let's not get into all that. I think the authors want a reasonable book to sell.. Let them. Why pee in their party, man. People will read it with passing fascination, remember some old songs, feel educated, and they get some money.. all is well.. no harm done. Interesting. I will have to play it to confirm then. Patdeep ka ek gaana hai megha chaye aadhi raat baran ban gayi niniyaa.. trying to hum those together does not give me the feel.. I will have to play it on the KB to be sure. I am still going to stick to yaman kalyan for the interim (apne aap ko wrong bolne me sharm aata hai naa khud ko..). Madhuvanti mein, even if not a good example, I can think of only one song and that is a way obscure one.. Rasm-e-Ulfat Ko Nibhaen To Nibhaen Kaise from movie Dil Ki Rahen. And hindustani/carnatic music is an ocean.. I got trained for a few years and that in a mixed martial arts style.. a little boxing and karate and taekwondo thrown in.. i.e. hindustani, carnatic and western theory.. so it is impossible that I will be an "expert" who knows all the ragas out there and recognizes them (jack of all perhaps but master not at all). There are tons of ragas that I might not even have heard of, or styles thereof.. It is highly possible that I am wrong here but my gut feel still leans towards yaman-kalyan or its variant. But I will not say now that it is a 100% sure case as I did earlier. Ab your other friend has firmly planted that doubt in my bird brain. Yaar, Re to hai aaroha mein.. which both patdeep and madhuvanti do not have as far as I can recall! Jara confirm karoge from your friend? But hindi movies do not conform to the rules of classical music strictly. 
So vo bhi ek locha hai.
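A quick way to sanity-check the note-set argument in this thread is to write the swara sets out. The sketch below uses my own shorthand encoding (capital M for tivra Ma, lowercase m for shuddha ma, lowercase d/n for komal dha/ni), not any standard notation: pure Yaman carries only the tivra Ma, Yaman Kalyan additionally touches shuddha ma, and neither has room for a komal dha or ni.

```python
# Toy encoding of swara sets. Capital M = tivra Ma, lowercase m = shuddha ma;
# lowercase d/n would be komal dha/ni, which are absent from both ragas below.
YAMAN = {"S", "R", "G", "M", "P", "D", "N"}   # all shuddha swars except tivra Ma
YAMAN_KALYAN = YAMAN | {"m"}                  # adds shuddha ma as a touch note

def uses_komal_dha_or_ni(swars):
    """True if the set contains komal dha (d) or komal ni (n)."""
    return bool(swars & {"d", "n"})

print("m" in YAMAN_KALYAN, "m" in YAMAN)      # True False
print(uses_komal_dha_or_ni(YAMAN_KALYAN))     # False
```

The only difference between the two sets is the shuddha ma, which is the mM point being made above; the komal check returns False for both, matching the claim that there is no room for komal dha or ni in either raga.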
2
0.67022
0.214157
Predictors of neurobehavioral symptoms in a university population: a multivariate approach using a postconcussive symptom questionnaire. Several factors have been linked to severity of postconcussive-type (neurobehavioral) symptoms. In this study, predictors of neurobehavioral symptoms were examined using multivariate methods to determine the relative importance of each. Data regarding demographics, symptoms, current alcohol use, history of traumatic brain injury (TBI), orthopedic injuries, and psychiatric/developmental diagnoses were collected via questionnaire from 3027 university students. The most prominent predictors of symptoms were gender, history of depression or anxiety, history of attention-deficit/hyperactivity disorder or learning disability diagnosis, and frequency of alcohol use. Prior mild TBI was significantly related to overall symptoms, but this effect was small in comparison to other predictors. These results provide further evidence that neurobehavioral symptoms are multi-determined phenomena, and highlight the importance of psychiatric comorbidity, demographic factors, and health behaviors to neurobehavioral symptom presentation after mild TBI.
2
1.909925
0.993555
The Difficult Crossing

The Difficult Crossing (La traversée difficile) is the name given to two oil-on-canvas paintings by the Belgian surrealist René Magritte. The original version was completed in 1926 during Magritte's early prolific years of surrealism and is currently held in a private collection. A later version was completed in 1963 and is also held in a private collection.

The 1926 version

The 1926 version contains a number of curious elements, some of which are common to many of Magritte's works. The bilboquet or baluster (the object which looks like the bishop from a chess set) first appears in the painting The Lost Jockey (1926). In this and some other works—for example The Secret Player (1927) and The Art of Conversation (1961)—the bilboquet seems to play an inanimate role analogous to a tree or plant. In other instances, such as here with The Difficult Crossing, the bilboquet is given the anthropomorphic feature of a single eye.

Another common feature of Magritte's works seen here is the ambiguity between windows and paintings. The back of the room shows a boat in a thunderstorm, but the viewer is left to wonder if the depiction is a painting or the view out a window. Magritte elevated the idea to another level in his series of works based on The Human Condition, where "outdoor" paintings and windows both appear and even overlap.

Near the bilboquet stands a table. On the top, a disembodied hand is holding a red bird, as if clutching it. The front right leg of the table resembles a human leg.

The 1963 version

In the 1963 version, a number of elements have changed or disappeared. Instead of taking place in a room, the action has moved outside. There is no table or hand clutching a bird, and the scene of the rough sea in the ambiguous window/painting at the rear becomes the entire new background. Near the front a low brick wall is seen, with a bilboquet behind and a suited figure with an eyeball for a head in front. 
There is ambiguity as to whether the suited figure is a man or another bilboquet. Some bilboquet figures, for example those in The Encounter (1929), have similar eyeball heads; however, the suit covers the body and no clear identification can be made. If the suited figure is a man, it could be a self-portrait, which means that the eyeball is covering his face. Covering the face with an object was another of Magritte's recurring themes, Son of Man being a good example.

Relation to other paintings

Both versions of The Difficult Crossing show a strong similarity to Magritte's painting The Birth of the Idol, also from 1926. The scene is outside and depicts a rough sea in the background (this time without a ship). Objects which appear include a bilboquet (of the non-anthropomorphic variety), a mannequin arm (similar to the hand which clutches the bird) and a wooden board with window-like holes cut out which is nearly identical to those flanking both sides of the room in the earlier version. All three paintings may have been inspired by Giorgio de Chirico's Metaphysical Interior (1916), which features a room with a number of strange objects and an ambiguous window/painting showing a boat. Magritte was certainly aware of De Chirico's work and was emotionally moved by his first viewing of a reproduction of Song of Love (1913–14).
2
1.264238
0.807753
God’s 4 Signs to Moses: The Staff That Turned into a Serpent

The LORD said to him, “What is that in your hand?” And he said, “A staff.” Then He said, “Throw it on the ground.” So he threw it on the ground, and it became a serpent; and Moses fled from it. But the LORD said to Moses, “Stretch out your hand and grasp it by its tail”—so he stretched out his hand and caught it, and it became a staff in his hand… – Exodus 4:2-4

Oh Father, our heavenly Father, wash us in the blood of Your Son again today. Let us come to You with a clean conscience. Wash us in the Water of Your Word. Open the eyes of our hearts to see Christ, receive Christ, and enjoy Christ. In Jesus’ Name, which is above all names, Amen!

In Egypt, Moses had the highest education and was skilled in speech (Acts 7:22). When the Lord came to give Moses the revelation that He would free the Israelites, Moses tried to achieve the Lord’s will in his own strength and ended up killing an Egyptian. Trying in his own strength just led to death. The Lord took Moses through 40 years of death in the wilderness. Moses was a broken man when the Lord appeared to him again. Moses could no longer speak well (Exodus 4:10) and he needed a staff for walking.

We all have many staffs that we rely on. Every earthly thing that you rely on for your daily living is a staff. Your career is a staff. Your education is a staff. Your news is a staff. Your hobby is a staff. Your caffeine is a staff. Your sweets are a staff. Your family is a staff. Your car is a staff. Your intellect is a staff. Your emotions are a staff. Your television is a staff. Your video games are a staff. Your internet is a staff. There is no Life in these staffs. These staffs are dead wood. When Moses threw his staff down it became a serpent. Not only is your staff dead, but it is also a serpent. Whatever you rely on, other than the Lord, becomes a serpent. Your staffs must be thrown down and put to death. 
Only then, by the Lord’s command, can you grasp the serpent by the tail, and lift the staff up in resurrection. Don’t abandon your family. Don’t set up a law in your heart on what you may eat or drink. But turn to the Lord, beholding Him and giving Him your ear. Give everything that you rely on to Him and let Him give you back what you need in resurrection. In resurrection, you are no longer reliant upon the staff; you are reliant on God alone.

Father, we cast our staffs down before You. Open our eyes to see the things that Your enemy uses to usurp Your rightful throne in our hearts. You provide everything we need as we seek Your Kingdom and Your Righteousness first. Let us grasp the enemy by the tail in Your Resurrection Life, that we may be overcomers in Your Victory. In Jesus’ Name, Amen.

1
If I’d know Christ’s risen power,
I must ever love the Cross;
Life from death alone arises;
There’s no gain except by loss.

Chorus
If no death, no life,
If no death, no life;
Life from death alone arises;
If no death, no life.

2
If I’d have Christ formed within me,
I must breathe my final breath,
Live within the Cross’s shadow,
Put my soul-life e’er to death.

3
If God thru th’ Eternal Spirit
Nail me ever with the Lord;
Only then as death is working
Will His life thru me be poured.
2
0.949564
0.026974
# -*- coding: utf-8 -*-

from ... import OratorTestCase
from orator import Model as BaseModel
from orator.orm import (
    morph_to,
    has_one,
    has_many,
    belongs_to_many,
    morph_many,
    belongs_to,
)
from orator.orm.model import ModelRegister
from orator.connections import SQLiteConnection
from orator.connectors.sqlite_connector import SQLiteConnector


class DecoratorsTestCase(OratorTestCase):
    @classmethod
    def setUpClass(cls):
        Model.set_connection_resolver(DatabaseIntegrationConnectionResolver())

    @classmethod
    def tearDownClass(cls):
        Model.unset_connection_resolver()

    def setUp(self):
        with self.schema().create("test_users") as table:
            table.increments("id")
            table.string("email").unique()
            table.timestamps()

        with self.schema().create("test_friends") as table:
            table.increments("id")
            table.integer("user_id")
            table.integer("friend_id")

        with self.schema().create("test_posts") as table:
            table.increments("id")
            table.integer("user_id")
            table.string("name")
            table.timestamps()
            table.soft_deletes()

        with self.schema().create("test_photos") as table:
            table.increments("id")
            table.morphs("imageable")
            table.string("name")
            table.timestamps()

    def tearDown(self):
        self.schema().drop("test_users")
        self.schema().drop("test_friends")
        self.schema().drop("test_posts")
        self.schema().drop("test_photos")

    def test_extra_queries_are_properly_set_on_relations(self):
        self.create()

        # With eager loading
        user = OratorTestUser.with_("friends", "posts", "post", "photos").find(1)
        post = OratorTestPost.with_("user", "photos").find(1)
        self.assertEqual(1, len(user.friends))
        self.assertEqual(2, len(user.posts))
        self.assertIsInstance(user.post, OratorTestPost)
        self.assertEqual(3, len(user.photos))
        self.assertIsInstance(post.user, OratorTestUser)
        self.assertEqual(2, len(post.photos))
        self.assertEqual(
            'SELECT * FROM "test_users" INNER JOIN "test_friends" ON "test_users"."id" = "test_friends"."friend_id" '
            'WHERE "test_friends"."user_id" = ? ORDER BY "friend_id" ASC',
            user.friends().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_posts" WHERE "deleted_at" IS NULL AND "test_posts"."user_id" = ?',
            user.posts().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_posts" WHERE "test_posts"."user_id" = ? ORDER BY "name" DESC',
            user.post().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_photos" WHERE "name" IS NOT NULL AND "test_photos"."imageable_id" = ? AND "test_photos"."imageable_type" = ?',
            user.photos().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_users" WHERE "test_users"."id" = ? ORDER BY "id" ASC',
            post.user().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_photos" WHERE "test_photos"."imageable_id" = ? AND "test_photos"."imageable_type" = ?',
            post.photos().to_sql(),
        )

        # Without eager loading
        user = OratorTestUser.find(1)
        post = OratorTestPost.find(1)
        self.assertEqual(1, len(user.friends))
        self.assertEqual(2, len(user.posts))
        self.assertIsInstance(user.post, OratorTestPost)
        self.assertEqual(3, len(user.photos))
        self.assertIsInstance(post.user, OratorTestUser)
        self.assertEqual(2, len(post.photos))
        self.assertEqual(
            'SELECT * FROM "test_users" INNER JOIN "test_friends" ON "test_users"."id" = "test_friends"."friend_id" '
            'WHERE "test_friends"."user_id" = ? ORDER BY "friend_id" ASC',
            user.friends().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_posts" WHERE "deleted_at" IS NULL AND "test_posts"."user_id" = ?',
            user.posts().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_posts" WHERE "test_posts"."user_id" = ? ORDER BY "name" DESC',
            user.post().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_photos" WHERE "name" IS NOT NULL AND "test_photos"."imageable_id" = ? AND "test_photos"."imageable_type" = ?',
            user.photos().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_users" WHERE "test_users"."id" = ? ORDER BY "id" ASC',
            post.user().to_sql(),
        )
        self.assertEqual(
            'SELECT * FROM "test_photos" WHERE "test_photos"."imageable_id" = ? AND "test_photos"."imageable_type" = ?',
            post.photos().to_sql(),
        )
        self.assertEqual(
            'SELECT DISTINCT * FROM "test_posts" WHERE "deleted_at" IS NULL AND "test_posts"."user_id" = ? ORDER BY "user_id" ASC',
            user.posts().order_by("user_id").distinct().to_sql(),
        )

    def create(self):
        user = OratorTestUser.create(id=1, email="[email protected]")
        friend = OratorTestUser.create(id=2, email="[email protected]")
        user.friends().attach(friend)

        post1 = user.posts().create(name="First Post")
        post2 = user.posts().create(name="Second Post")

        user.photos().create(name="Avatar 1")
        user.photos().create(name="Avatar 2")
        user.photos().create(name="Avatar 3")

        post1.photos().create(name="Hero 1")
        post1.photos().create(name="Hero 2")

    def connection(self):
        return Model.get_connection_resolver().connection()

    def schema(self):
        return self.connection().get_schema_builder()


class Model(BaseModel):
    _register = ModelRegister()


class OratorTestUser(Model):
    __table__ = "test_users"
    __guarded__ = []

    @belongs_to_many("test_friends", "user_id", "friend_id", with_pivot=["id"])
    def friends(self):
        return OratorTestUser.order_by("friend_id")

    @has_many("user_id")
    def posts(self):
        return OratorTestPost.where_null("deleted_at")

    @has_one("user_id")
    def post(self):
        return OratorTestPost.order_by("name", "desc")

    @morph_many("imageable")
    def photos(self):
        return OratorTestPhoto.where_not_null("name")


class OratorTestPost(Model):
    __table__ = "test_posts"
    __guarded__ = []

    @belongs_to("user_id")
    def user(self):
        return OratorTestUser.order_by("id")

    @morph_many("imageable")
    def photos(self):
        return "test_photos"


class OratorTestPhoto(Model):
    __table__ = "test_photos"
    __guarded__ = []

    @morph_to
    def imageable(self):
        return


class DatabaseIntegrationConnectionResolver(object):

    _connection = None

    def connection(self, name=None):
        if self._connection:
            return self._connection

        self._connection = SQLiteConnection(
            SQLiteConnector().connect({"database": ":memory:"})
        )

        return self._connection

    def get_default_connection(self):
        return "default"

    def set_default_connection(self, name):
        pass
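The relation decorators exercised in the tests above (@has_many, @morph_many and friends) all follow the same pattern: the decorator captures configuration such as the foreign key, and the wrapped method body supplies extra query constraints. A minimal framework-free sketch of that pattern follows; it is a toy illustration of the decorator idiom, not Orator's actual implementation.

```python
def has_many(foreign_key):
    """Toy stand-in for an ORM relation decorator: records config and wraps the method."""
    def decorator(fn):
        def wrapper(self):
            # The decorated method body supplies extra constraints for the relation.
            return {"foreign_key": foreign_key, "extra": fn(self)}
        wrapper.relation = "has_many"   # metadata the ORM layer could inspect later
        return wrapper
    return decorator

class User:
    @has_many("user_id")
    def posts(self):
        return "where deleted_at is null"

u = User()
print(u.posts())         # {'foreign_key': 'user_id', 'extra': 'where deleted_at is null'}
print(u.posts.relation)  # has_many
```

The real library builds a query object instead of a dict, but the shape is the same: configuration at decoration time, per-call constraints from the method body.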
1
0.679477
0.992622
Saturday, May 30, 2009

On solid grounds: Campground owners hope that spending on improvements pays off with more visitors in slumping economy

From the Wisconsin State Journal
By Marv Balousek

While some businesses may be making cutbacks because of the recession, Wisconsin's private campground owners have been spending money to make their properties more attractive to prospective customers this year. They've invested hundreds of thousands of dollars to improve their facilities even without the benefit of federal stimulus dollars. The owners expect to cash in as families scale back their Disney World plans this summer in favor of less-expensive weekend camping trips. Reservations are up this year for the 16-week season that began on Memorial Day weekend, according to two Wisconsin campground owners. "Camping, even in stressful times, can be the outdoor activity of choice," said Bud Styer, who operates five Wisconsin campgrounds. "People with families especially are still going to recreate and they're going to do something with their kids." Styer said he is spending $565,000 this year at his five campgrounds and expects to recoup that investment in three to five years through camping fees. He's spent money on things such as a Jumping Pillow for Baraboo Hills Campground north of Baraboo, blacktop for a circle around the pond at Merry Mac's Campground near Merrimac and a remodeled camp store at River Bend Campground, which he manages but doesn't own, west of Watertown. River Bend, which features a 300-foot water slide, was closed last summer because of extensive flooding when the Crawfish River overflowed its banks. It didn't reopen until August. Styer said the campground had to be cleaned before improvements were made. He also has upgraded Smokey Hollow Campground near Lodi and Tilleda Falls Campground west of Shawano. 
Water-related features such as Water Wars -- a competition with water balloons -- or floating water slides and climbing walls are popular improvements at many parks. "Years ago, we camped in a Coleman tent with a kerosene lantern," Styer said. "Nowadays, everybody's got to have electric, water, box fans and rope lights." Styer said he's a great believer in "stuff" and that the more stuff you have, the more you can charge for campsites. A private Wisconsin campground with amenities can charge $39 to $50 a night, he said, compared to $25 to $35 a night for a standard campground. "If you want to expand your business and generate additional revenues, then you have to have a better facility," he said. "It has to have the bells and whistles. People are going to camp closer to home and look for the best value." Upgrading campground facilities this year is a national trend, said Linda Pfofaizer, president of the National Association of RV Parks and Campgrounds in Larkspur, Colo. The association represents 8,000 private campground owners. Although the investments could benefit them this summer, she said, most campground owners also are looking beyond the recession. "The recession is temporary," she said. "Most campground and RV park operators believe that it behooves them to move forward with their improvement plans to remain competitive with other travel and tourism options." "We try to keep adding what the customers are asking for," he said. "A few years ago, during a downturn, there were many people who didn't travel West or take a large vacation, and we're seeing that again." At Fox Hill RV Park south of Wisconsin Dells near Ho Chunk Casino, roads have been repaved with recycled asphalt, the pool was retiled, the bath house was remodeled and a disc golf course was added, said owner Jim Tracy. He said the overall construction slowdown helped him negotiate a good deal on the bath house remodeling. "I'm still pretty bullish on the summer," Tracy said. 
"I want to give (campers) reasons to come back and talk me up to their friends and families."

Photo captions (Bud Styer Media):
Bud Styer, left, and Keith Stachurski, manager of Smokey Hollow Campground near Lodi, confer at a beach area of the campground. Styer has invested $565,000 this year in improvements at the five campgrounds he operates.
Zachary Zirbel cuts the grass at Smokey Hollow Campground as he prepares the sites for another influx of weekend campers.
Stachurski patrols Smokey Hollow on a Segway, a small electric vehicle. He also offers riding lessons to campers. The red structure behind him is used for Spaceball, a game that combines the skills of trampoline and basketball.
Furnished Conestoga wagons and beachfront yurts are among the camping options at Smokey Hollow Campground near Lodi.
Children play on a Jumping Pillow at Chetek River Campground near Chetek, north of Eau Claire.
A row of furnished yurts, or circular tents, is another camping option at Merry Mac's Campground in Merrimac.
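As a back-of-envelope check on Styer's three-to-five-year recoup estimate reported above: the per-night premium follows the article's $39-versus-$25 price gap, but the site count is an assumed figure of mine, not from the article.

```python
# Rough payback math for the $565,000 in campground improvements.
investment = 565_000
premium = 14            # $/night premium of an amenity park over a standard one ($39 vs $25)
sites = 100             # assumed rentable sites across the five properties (my guess)
season_nights = 16 * 7  # the 16-week season mentioned in the article

site_nights_needed = investment / premium
seasons_needed = site_nights_needed / (sites * season_nights)
print(round(seasons_needed, 1))  # ~3.6 seasons under these assumptions
```

With those assumed occupancy figures the payback lands inside the three-to-five-year window Styer quotes; a smaller premium or fewer sites stretches it accordingly.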
1
0.851217
0.174457
Q: Drag and drop in jQuery

I don't get the id of the dropped div. On dropping it on another div I get the id_droppable, but I don't get the id of the dropped div; the alert for id_dropped gives undefined as a result. Please help me verify my code and correct my error.

    $(".full-circle").droppable({
        accept: ".unseated_guest",
        drop: function(event, ui) {
            var id_droppable = this.id;
            alert(id_droppable);
            var id_dropped = ui.id;
            alert(id_dropped);
            var name = document.getElementById("div_name_0").value;
            $(this).css("background-color", "red");
            $(this).append(ui.draggable);
            //$(this).draggable('disable');
        }
    });

A: The ui parameter does not have an id attribute, as it is a reference to the element being dragged. You need to get the id like ui.draggable.attr('id'), or whatever method you prefer to get the id of an element.

    $(".full-circle").droppable({
        accept: ".unseated_guest",
        drop: function(event, ui) {
            //Stuff above
            var id_dropped = ui.draggable.attr('id');
            alert(id_dropped);
            //Stuff below
        }
    });
1
0.861836
0.056718
Girmal Falls

General

This waterfall drops from a height of up to 100 feet, making it the highest waterfall in Gujarat. The picturesque beauty of this site makes it popular among visitors and people of the region alike. The water falls swiftly from a great height, creating an eye-catching, fog-like mist. The state government is working on several projects to make this place an ideal picnic spot and tourist attraction. The fall is at its best during the monsoon, when it presents an immensely striking appearance. Some of the best natural features of Gujarat can be explored here, and the place makes a pleasant, refreshing retreat for any traveler.
1
1.045086
0.477685
Be afraid, England and Wales 2019. The Aussies are coming. Or rather, the Aussies are still coming, after an 86-run defeat of a New Zealand team who seemed consumed by the occasion at Lord’s. At times in the Black Caps’ attempts to chase 243 this felt a bit like a Sunday morning junior age group game. Steve Smith sent down some weird, wonky all-sorts. Wickets were greeted with jokey huddles. It took the return of Mitchell Starc to restore a sense of World Cup order, figures of five for 26 reflecting a spell of brutal, high-grade, white-ball fast-bowling that blew away the tail. Victory leaves Australia on their own at the top of the group stage table with seven wins from eight, and with some of their own question marks finding an answer or two. They had some help along the way, not least from Kane Williamson’s diffident captaincy. On a sun-baked north London day New Zealand had first shown how to beat Australia; then almost immediately they showed how to fail to beat Australia. Exposing that thin-looking middle order had always looked a plan. Failing to punch through by taking off your best bowlers was where the game got away, captured by the sight of the skipper wheeling out seven overs of mid-innings part-time leg-spin. Trent Boult even had time at the end of Australia’s innings to conjure a largely pointless World Cup hat-trick. Instead it was a gutsy, occasionally streaky 107-run sixth-wicket partnership between Usman Khawaja and Alex Carey that decided this game. From the start Lord’s was a place of Trans-Tasman good cheer as the grey shroud of the last few weeks lifted. Australia had won the toss and elected to bat. In any list of David Warner’s top five career sledges, the line “You’re not f-ing facing Trent Boult’s 80mph half-volleys now, mate” – yelled at Joe Root as he took guard during the Cardiff Ashes Test of 2015 – might just make it on grounds of subtlety alone. 
This time it was Warner’s turn to face the Boult music, a tricky prospect at the start of a heat-hazed day. Boult’s third over saw Aaron Finch out lbw falling over an inswinger. Colin de Grandhomme shared the new ball, toiling in manfully from the nursery end like a man with a two-seat sofa strapped to his back. But it was Lockie Ferguson who made the most telling incision. Ferguson was a joy to watch, a thrillingly athletic fast bowler with an air of the old school adventurer about him, so much so you half expect to see him handing the umpire his fedora and bull-whip before every over. Here Ferguson took out Warner and Steve Smith for two runs in seven balls. First he bounced out Warner. Smith was booed on. And Ferguson soon did for him too, thanks to another moment of brilliance. Smith pulled another short one, middling it with a lovely, sweet clump. At short backward square leg Martin Guptill dived full length and stuck out a hand. Eventually he stood up, raised his hand and threw a ball – apparently the same one – into the sky. It was a catch that will look good in replay. In real time it was a moment to stop the days and spin it back on its axis. James Neesham entered the attack and 81 for three became 81 for four as Marcus Stoinis was caught behind, before Neesham held a one-handed caught and bowled just above the grass to get rid of Glenn Maxwell. New Zealand had Australia wobbling around the ring at five for 92 after 21 overs. But Khawaja found a partner in Carey, who clipped and carved at assorted short-pitch offerings as New Zealand struggled to adapt their length to his punchy style. The fifty partnership arrived off 51 balls, at the same time as Khawaja’s own half-century, an innings that will be doubly satisfying on a day when no one else in Australia’s top six got to 25. Carey inside-edged to the pavilion fence to reach a battling 51 off 41 balls. There is a jaunty fearlessness to his cricket. 
Best of all he averages 50 now at No 7 for Australia and has made that tricky slot a position of strength in the last month. There will be regrets for New Zealand. Not least in Boult’s disappearance from the attack until the 42nd over. Their chase never really got started. Jason Behrendorff dismissed both openers and a 20-over score of 61 for two deteriorated to 157 all out as only Williamson seemed to have the skill to score on a crabby pitch. Australia were talked down at the World Cup’s start as a team overly reliant on five star players. At Lord’s it was the underrated back-up cast who dug in to turn this game, maintaining the air of a team finding other gears as this tournament narrows towards its end point.
1
0.845096
0.401543
#!/usr/bin/env python3 import argparse import common import functools import multiprocessing import os import os.path import pathlib import re import subprocess import stat import sys import traceback import shutil import paths EXCLUDED_PREFIXES = ("./generated/", "./thirdparty/", "./build", "./.git/", "./bazel-", "./.cache", "./source/extensions/extensions_build_config.bzl", "./bazel/toolchains/configs/", "./tools/testdata/check_format/", "./tools/pyformat/", "./third_party/") SUFFIXES = ("BUILD", "WORKSPACE", ".bzl", ".cc", ".h", ".java", ".m", ".md", ".mm", ".proto", ".rst") DOCS_SUFFIX = (".md", ".rst") PROTO_SUFFIX = (".proto") # Files in these paths can make reference to protobuf stuff directly GOOGLE_PROTOBUF_ALLOWLIST = ("ci/prebuilt", "source/common/protobuf", "api/test") REPOSITORIES_BZL = "bazel/repositories.bzl" # Files matching these exact names can reference real-world time. These include the class # definitions for real-world time, the construction of them in main(), and perf annotation. # For now it includes the validation server but that really should be injected too. REAL_TIME_ALLOWLIST = ("./source/common/common/utility.h", "./source/extensions/common/aws/utility.cc", "./source/common/event/real_time_system.cc", "./source/common/event/real_time_system.h", "./source/exe/main_common.cc", "./source/exe/main_common.h", "./source/server/config_validation/server.cc", "./source/common/common/perf_annotation.h", "./test/common/common/log_macros_test.cc", "./test/test_common/simulated_time_system.cc", "./test/test_common/simulated_time_system.h", "./test/test_common/test_time.cc", "./test/test_common/test_time.h", "./test/test_common/utility.cc", "./test/test_common/utility.h", "./test/integration/integration.h") # Tests in these paths may make use of the Registry::RegisterFactory constructor or the # REGISTER_FACTORY macro. Other locations should use the InjectFactory helper class to # perform temporary registrations. 
REGISTER_FACTORY_TEST_ALLOWLIST = ("./test/common/config/registry_test.cc",
                                   "./test/integration/clusters/",
                                   "./test/integration/filters/")

# Files in these paths can use MessageLite::SerializeAsString
SERIALIZE_AS_STRING_ALLOWLIST = (
    "./source/common/config/version_converter.cc",
    "./source/common/protobuf/utility.cc",
    "./source/extensions/filters/http/grpc_json_transcoder/json_transcoder_filter.cc",
    "./test/common/protobuf/utility_test.cc",
    "./test/common/config/version_converter_test.cc",
    "./test/common/grpc/codec_test.cc",
    "./test/common/grpc/codec_fuzz_test.cc",
    "./test/extensions/filters/http/common/fuzz/uber_filter.h",
)

# Files in these paths can use Protobuf::util::JsonStringToMessage
JSON_STRING_TO_MESSAGE_ALLOWLIST = ("./source/common/protobuf/utility.cc")

# Histogram names which are allowed to be suffixed with the unit symbol, all of the pre-existing
# ones were grandfathered as part of PR #8484 for backwards compatibility.
HISTOGRAM_WITH_SI_SUFFIX_ALLOWLIST = ("downstream_cx_length_ms", "initialization_time_ms",
                                      "loop_duration_us", "poll_delay_us", "request_time_ms",
                                      "upstream_cx_connect_ms", "upstream_cx_length_ms")

# Files in these paths can use std::regex
STD_REGEX_ALLOWLIST = (
    "./source/common/common/utility.cc", "./source/common/common/regex.h",
    "./source/common/common/regex.cc", "./source/common/stats/tag_extractor_impl.h",
    "./source/common/stats/tag_extractor_impl.cc",
    "./source/common/formatter/substitution_formatter.cc",
    "./source/extensions/filters/http/squash/squash_filter.h",
    "./source/extensions/filters/http/squash/squash_filter.cc", "./source/server/admin/utils.h",
    "./source/server/admin/utils.cc", "./source/server/admin/stats_handler.h",
    "./source/server/admin/stats_handler.cc", "./source/server/admin/prometheus_stats.h",
    "./source/server/admin/prometheus_stats.cc", "./tools/clang_tools/api_booster/main.cc",
    "./tools/clang_tools/api_booster/proto_cxx_utils.cc", "./source/common/version/version.cc")

# Only one C++ file should instantiate grpc_init
GRPC_INIT_ALLOWLIST = ("./source/common/grpc/google_grpc_context.cc")

# These files should not throw exceptions. Add HTTP/1 when exceptions removed.
EXCEPTION_DENYLIST = ("./source/common/http/http2/codec_impl.h",
                      "./source/common/http/http2/codec_impl.cc")

CLANG_FORMAT_PATH = os.getenv("CLANG_FORMAT", "clang-format-10")
BUILDIFIER_PATH = paths.getBuildifier()
BUILDOZER_PATH = paths.getBuildozer()
ENVOY_BUILD_FIXER_PATH = os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])),
                                      "envoy_build_fixer.py")
HEADER_ORDER_PATH = os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), "header_order.py")
SUBDIR_SET = set(common.includeDirOrder())
INCLUDE_ANGLE = "#include <"
INCLUDE_ANGLE_LEN = len(INCLUDE_ANGLE)

PROTO_PACKAGE_REGEX = re.compile(r"^package (\S+);\n*", re.MULTILINE)
X_ENVOY_USED_DIRECTLY_REGEX = re.compile(r'.*\"x-envoy-.*\".*')
DESIGNATED_INITIALIZER_REGEX = re.compile(r"\{\s*\.\w+\s*\=")
MANGLED_PROTOBUF_NAME_REGEX = re.compile(r"envoy::[a-z0-9_:]+::[A-Z][a-z]\w*_\w*_[A-Z]{2}")
HISTOGRAM_SI_SUFFIX_REGEX = re.compile(r"(?<=HISTOGRAM\()[a-zA-Z0-9_]+_(b|kb|mb|ns|us|ms|s)(?=,)")
TEST_NAME_STARTING_LOWER_CASE_REGEX = re.compile(r"TEST(_.\(.*,\s|\()[a-z].*\)\s\{")
EXTENSIONS_CODEOWNERS_REGEX = re.compile(r'.*(extensions[^@]*\s+)(@.*)')
COMMENT_REGEX = re.compile(r"//|\*")
DURATION_VALUE_REGEX = re.compile(r'\b[Dd]uration\(([0-9.]+)')
PROTO_VALIDATION_STRING = re.compile(r'\bmin_bytes\b')
VERSION_HISTORY_NEW_LINE_REGEX = re.compile("\* ([a-z \-_]+): ([a-z:`]+)")
VERSION_HISTORY_SECTION_NAME = re.compile("^[A-Z][A-Za-z ]*$")
RELOADABLE_FLAG_REGEX = re.compile(".*(.)(envoy.reloadable_features.[^ ]*)\s.*")
# Check for punctuation in a terminal ref clause, e.g.
# :ref:`panic mode. <arch_overview_load_balancing_panic_threshold>`
REF_WITH_PUNCTUATION_REGEX = re.compile(".*\. <[^<]*>`\s*")
DOT_MULTI_SPACE_REGEX = re.compile("\\. +")

# yapf: disable
PROTOBUF_TYPE_ERRORS = {
    # Well-known types should be referenced from the ProtobufWkt namespace.
    "Protobuf::Any": "ProtobufWkt::Any",
    "Protobuf::Empty": "ProtobufWkt::Empty",
    "Protobuf::ListValue": "ProtobufWkt::ListValue",
    "Protobuf::NULL_VALUE": "ProtobufWkt::NULL_VALUE",
    "Protobuf::StringValue": "ProtobufWkt::StringValue",
    "Protobuf::Struct": "ProtobufWkt::Struct",
    "Protobuf::Value": "ProtobufWkt::Value",

    # Other common mis-namespacing of protobuf types.
    "ProtobufWkt::Map": "Protobuf::Map",
    "ProtobufWkt::MapPair": "Protobuf::MapPair",
    "ProtobufUtil::MessageDifferencer": "Protobuf::util::MessageDifferencer"
}

LIBCXX_REPLACEMENTS = {
    "absl::make_unique<": "std::make_unique<",
}

UNOWNED_EXTENSIONS = {
  "extensions/filters/http/ratelimit",
  "extensions/filters/http/buffer",
  "extensions/filters/http/rbac",
  "extensions/filters/http/ip_tagging",
  "extensions/filters/http/tap",
  "extensions/filters/http/health_check",
  "extensions/filters/http/cors",
  "extensions/filters/http/ext_authz",
  "extensions/filters/http/dynamo",
  "extensions/filters/http/lua",
  "extensions/filters/http/common",
  "extensions/filters/common",
  "extensions/filters/common/ratelimit",
  "extensions/filters/common/rbac",
  "extensions/filters/common/lua",
  "extensions/filters/listener/original_dst",
  "extensions/filters/listener/proxy_protocol",
  "extensions/stat_sinks/statsd",
  "extensions/stat_sinks/common",
  "extensions/stat_sinks/common/statsd",
  "extensions/health_checkers/redis",
  "extensions/access_loggers/grpc",
  "extensions/access_loggers/file",
  "extensions/common/tap",
  "extensions/transport_sockets/raw_buffer",
  "extensions/transport_sockets/tap",
  "extensions/tracers/zipkin",
  "extensions/tracers/dynamic_ot",
  "extensions/tracers/opencensus",
  "extensions/tracers/lightstep",
  "extensions/tracers/common",
  "extensions/tracers/common/ot",
  "extensions/retry/host/previous_hosts",
  "extensions/filters/network/ratelimit",
  "extensions/filters/network/client_ssl_auth",
  "extensions/filters/network/rbac",
"extensions/filters/network/tcp_proxy", "extensions/filters/network/echo", "extensions/filters/network/ext_authz", "extensions/filters/network/redis_proxy", "extensions/filters/network/kafka", "extensions/filters/network/kafka/broker", "extensions/filters/network/kafka/protocol", "extensions/filters/network/kafka/serialization", "extensions/filters/network/mongo_proxy", "extensions/filters/network/common", "extensions/filters/network/common/redis", } # yapf: enable class FormatChecker: def __init__(self, args): self.operation_type = args.operation_type self.target_path = args.target_path self.api_prefix = args.api_prefix self.api_shadow_root = args.api_shadow_prefix self.envoy_build_rule_check = not args.skip_envoy_build_rule_check self.namespace_check = args.namespace_check self.namespace_check_excluded_paths = args.namespace_check_excluded_paths + [ "./tools/api_boost/testdata/", "./tools/clang_tools/", ] self.build_fixer_check_excluded_paths = args.build_fixer_check_excluded_paths + [ "./bazel/external/", "./bazel/toolchains/", "./bazel/BUILD", "./tools/clang_tools", ] self.include_dir_order = args.include_dir_order # Map a line transformation function across each line of a file, # writing the result lines as requested. 
  # If there is a clang format nesting or mismatch error, return the first occurrence
  def evaluateLines(self, path, line_xform, write=True):
    error_message = None
    format_flag = True
    output_lines = []
    for line_number, line in enumerate(self.readLines(path)):
      if line.find("// clang-format off") != -1:
        if not format_flag and error_message is None:
          error_message = "%s:%d: %s" % (path, line_number + 1, "clang-format nested off")
        format_flag = False
      if line.find("// clang-format on") != -1:
        if format_flag and error_message is None:
          error_message = "%s:%d: %s" % (path, line_number + 1, "clang-format nested on")
        format_flag = True
      if format_flag:
        output_lines.append(line_xform(line, line_number))
      else:
        output_lines.append(line)
    # We used to use fileinput in the older Python 2.7 script, but this doesn't do
    # inplace mode and UTF-8 in Python 3, so doing it the manual way.
    if write:
      pathlib.Path(path).write_text('\n'.join(output_lines), encoding='utf-8')
    if not format_flag and error_message is None:
      error_message = "%s:%d: %s" % (path, line_number + 1, "clang-format remains off")
    return error_message

  # Obtain all the lines in a given file.
  def readLines(self, path):
    return self.readFile(path).split('\n')

  # Read a UTF-8 encoded file as a str.
  def readFile(self, path):
    return pathlib.Path(path).read_text(encoding='utf-8')

  # lookPath searches for the given executable in all directories in PATH
  # environment variable. If it cannot be found, empty string is returned.
  def lookPath(self, executable):
    return shutil.which(executable) or ''

  # pathExists checks whether the given path exists. This function assumes that
  # the path is absolute and evaluates environment variables.
  def pathExists(self, executable):
    return os.path.exists(os.path.expandvars(executable))

  # executableByOthers checks whether the given path has execute permission for
  # others.
  def executableByOthers(self, executable):
    st = os.stat(os.path.expandvars(executable))
    return bool(st.st_mode & stat.S_IXOTH)

  # Check whether all needed external tools (clang-format, buildifier, buildozer) are
  # available.
  def checkTools(self):
    error_messages = []

    clang_format_abs_path = self.lookPath(CLANG_FORMAT_PATH)
    if clang_format_abs_path:
      if not self.executableByOthers(clang_format_abs_path):
        error_messages.append("command {} exists, but cannot be executed by other "
                              "users".format(CLANG_FORMAT_PATH))
    else:
      error_messages.append(
          "Command {} not found. If you have clang-format in version 10.x.x "
          "installed, but the binary name is different or it's not available in "
          "PATH, please use CLANG_FORMAT environment variable to specify the path. "
          "Examples:\n"
          "    export CLANG_FORMAT=clang-format-10.0.0\n"
          "    export CLANG_FORMAT=/opt/bin/clang-format-10\n"
          "    export CLANG_FORMAT=/usr/local/opt/llvm@10/bin/clang-format".format(
              CLANG_FORMAT_PATH))

    def checkBazelTool(name, path, var):
      bazel_tool_abs_path = self.lookPath(path)
      if bazel_tool_abs_path:
        if not self.executableByOthers(bazel_tool_abs_path):
          error_messages.append("command {} exists, but cannot be executed by other "
                                "users".format(path))
      elif self.pathExists(path):
        if not self.executableByOthers(path):
          error_messages.append("command {} exists, but cannot be executed by other "
                                "users".format(path))
      else:
        error_messages.append("Command {} not found. If you have {} installed, but the binary "
                              "name is different or it's not available in $GOPATH/bin, please use "
                              "{} environment variable to specify the path. Example:\n"
                              "    export {}=`which {}`\n"
                              "If you don't have {} installed, you can install it by:\n"
                              "    go get -u github.com/bazelbuild/buildtools/{}".format(
                                  path, name, var, var, name, name, name))

    checkBazelTool('buildifier', BUILDIFIER_PATH, 'BUILDIFIER_BIN')
    checkBazelTool('buildozer', BUILDOZER_PATH, 'BUILDOZER_BIN')

    return error_messages

  def checkNamespace(self, file_path):
    for excluded_path in self.namespace_check_excluded_paths:
      if file_path.startswith(excluded_path):
        return []

    nolint = "NOLINT(namespace-%s)" % self.namespace_check.lower()
    text = self.readFile(file_path)
    if not re.search("^\s*namespace\s+%s\s*{" % self.namespace_check, text, re.MULTILINE) and \
       not nolint in text:
      return [
          "Unable to find %s namespace or %s for file: %s" %
          (self.namespace_check, nolint, file_path)
      ]
    return []

  def packageNameForProto(self, file_path):
    package_name = None
    error_message = []
    result = PROTO_PACKAGE_REGEX.search(self.readFile(file_path))
    if result is not None and len(result.groups()) == 1:
      package_name = result.group(1)
    if package_name is None:
      error_message = ["Unable to find package name for proto file: %s" % file_path]

    return [package_name, error_message]

  # To avoid breaking the Lyft import, we just check for path inclusion here.
  def allowlistedForProtobufDeps(self, file_path):
    return (file_path.endswith(PROTO_SUFFIX) or file_path.endswith(REPOSITORIES_BZL) or \
            any(path_segment in file_path for path_segment in GOOGLE_PROTOBUF_ALLOWLIST))

  # Real-world time sources should not be instantiated in the source, except for a few
  # specific cases. They should be passed down from where they are instantiated to where
  # they need to be used, e.g. through the ServerInstance, Dispatcher, or ClusterManager.
  def allowlistedForRealTime(self, file_path):
    if file_path.endswith(".md"):
      return True
    return file_path in REAL_TIME_ALLOWLIST

  def allowlistedForRegisterFactory(self, file_path):
    if not file_path.startswith("./test/"):
      return True

    return any(file_path.startswith(prefix) for prefix in REGISTER_FACTORY_TEST_ALLOWLIST)

  def allowlistedForSerializeAsString(self, file_path):
    return file_path in SERIALIZE_AS_STRING_ALLOWLIST or file_path.endswith(DOCS_SUFFIX)

  def allowlistedForJsonStringToMessage(self, file_path):
    return file_path in JSON_STRING_TO_MESSAGE_ALLOWLIST

  def allowlistedForHistogramSiSuffix(self, name):
    return name in HISTOGRAM_WITH_SI_SUFFIX_ALLOWLIST

  def allowlistedForStdRegex(self, file_path):
    return file_path.startswith("./test") or file_path in STD_REGEX_ALLOWLIST or file_path.endswith(
        DOCS_SUFFIX)

  def allowlistedForGrpcInit(self, file_path):
    return file_path in GRPC_INIT_ALLOWLIST

  def allowlistedForUnpackTo(self, file_path):
    return file_path.startswith("./test") or file_path in [
        "./source/common/protobuf/utility.cc", "./source/common/protobuf/utility.h"
    ]

  def denylistedForExceptions(self, file_path):
    # Returns true when it is a non test header file or the file_path is in DENYLIST or
    # it is under tools/testdata subdirectory.
    if file_path.endswith(DOCS_SUFFIX):
      return False

    return (file_path.endswith('.h') and not file_path.startswith("./test/")) or file_path in EXCEPTION_DENYLIST \
        or self.isInSubdir(file_path, 'tools/testdata')

  def isApiFile(self, file_path):
    return file_path.startswith(self.api_prefix) or file_path.startswith(self.api_shadow_root)

  def isBuildFile(self, file_path):
    basename = os.path.basename(file_path)
    if basename in {"BUILD", "BUILD.bazel"} or basename.endswith(".BUILD"):
      return True
    return False

  def isExternalBuildFile(self, file_path):
    return self.isBuildFile(file_path) and (file_path.startswith("./bazel/external/") or
                                            file_path.startswith("./tools/clang_tools"))

  def isStarlarkFile(self, file_path):
    return file_path.endswith(".bzl")

  def isWorkspaceFile(self, file_path):
    return os.path.basename(file_path) == "WORKSPACE"

  def isBuildFixerExcludedFile(self, file_path):
    for excluded_path in self.build_fixer_check_excluded_paths:
      if file_path.startswith(excluded_path):
        return True
    return False

  def hasInvalidAngleBracketDirectory(self, line):
    if not line.startswith(INCLUDE_ANGLE):
      return False
    path = line[INCLUDE_ANGLE_LEN:]
    slash = path.find("/")
    if slash == -1:
      return False
    subdir = path[0:slash]
    return subdir in SUBDIR_SET

  def checkCurrentReleaseNotes(self, file_path, error_messages):
    first_word_of_prior_line = ''
    next_word_to_check = ''  # first word after :
    prior_line = ''

    def endsWithPeriod(prior_line):
      if not prior_line:
        return True  # Don't punctuation-check empty lines.
      if prior_line.endswith('.'):
        return True  # Actually ends with .
      if prior_line.endswith('`') and REF_WITH_PUNCTUATION_REGEX.match(prior_line):
        return True  # The text in the :ref ends with a .
      return False

    for line_number, line in enumerate(self.readLines(file_path)):

      def reportError(message):
        error_messages.append("%s:%d: %s" % (file_path, line_number + 1, message))

      if VERSION_HISTORY_SECTION_NAME.match(line):
        if line == "Deprecated":
          # The deprecations section is last, and does not have enforced formatting.
          break

        # Reset all parsing at the start of a section.
        first_word_of_prior_line = ''
        next_word_to_check = ''  # first word after :
        prior_line = ''

      # make sure flags are surrounded by ``s
      flag_match = RELOADABLE_FLAG_REGEX.match(line)
      if flag_match:
        if not flag_match.groups()[0].startswith('`'):
          reportError("Flag `%s` should be enclosed in back ticks" % flag_match.groups()[1])

      if line.startswith("* "):
        if not endsWithPeriod(prior_line):
          reportError("The following release note does not end with a '.'\n %s" % prior_line)

        match = VERSION_HISTORY_NEW_LINE_REGEX.match(line)
        if not match:
          reportError("Version history line malformed. "
                      "Does not match VERSION_HISTORY_NEW_LINE_REGEX in check_format.py\n %s" %
                      line)
        else:
          first_word = match.groups()[0]
          next_word = match.groups()[1]
          # Do basic alphabetization checks of the first word on the line and the
          # first word after the :
          if first_word_of_prior_line and first_word_of_prior_line > first_word:
            reportError(
                "Version history not in alphabetical order (%s vs %s): please check placement of line\n %s. "
                % (first_word_of_prior_line, first_word, line))
          if first_word_of_prior_line == first_word and next_word_to_check and next_word_to_check > next_word:
            reportError(
                "Version history not in alphabetical order (%s vs %s): please check placement of line\n %s. "
                % (next_word_to_check, next_word, line))
          first_word_of_prior_line = first_word
          next_word_to_check = next_word

        prior_line = line
      elif not line:
        # If we hit the end of this release note block, check the prior line.
        if not endsWithPeriod(prior_line):
          reportError("The following release note does not end with a '.'\n %s" % prior_line)
      elif prior_line:
        prior_line += line

  def checkFileContents(self, file_path, checker):
    error_messages = []
    if file_path.endswith("version_history/current.rst"):
      # Version file checking has enough special cased logic to merit its own checks.
      # This only validates entries for the current release as very old release
      # notes have a different format.
      self.checkCurrentReleaseNotes(file_path, error_messages)

    def checkFormatErrors(line, line_number):

      def reportError(message):
        error_messages.append("%s:%d: %s" % (file_path, line_number + 1, message))

      checker(line, file_path, reportError)

    evaluate_failure = self.evaluateLines(file_path, checkFormatErrors, False)
    if evaluate_failure is not None:
      error_messages.append(evaluate_failure)

    return error_messages

  def fixSourceLine(self, line, line_number):
    # Strip double space after '.' This may prove overenthusiastic and need to
    # be restricted to comments and metadata files but works for now.
    line = re.sub(DOT_MULTI_SPACE_REGEX, ". ", line)

    if self.hasInvalidAngleBracketDirectory(line):
      line = line.replace("<", '"').replace(">", '"')

    # Fix incorrect protobuf namespace references.
    for invalid_construct, valid_construct in PROTOBUF_TYPE_ERRORS.items():
      line = line.replace(invalid_construct, valid_construct)

    # Use recommended cpp stdlib
    for invalid_construct, valid_construct in LIBCXX_REPLACEMENTS.items():
      line = line.replace(invalid_construct, valid_construct)

    return line

  # We want to look for a call to condvar.waitFor, but there's no strong pattern
  # to the variable name of the condvar. If we just look for ".waitFor" we'll also
  # pick up time_system_.waitFor(...), and we don't want to return true for that
  # pattern. But in that case there is a strong pattern of using time_system in
  # various spellings as the variable name.
  def hasCondVarWaitFor(self, line):
    wait_for = line.find(".waitFor(")
    if wait_for == -1:
      return False
    preceding = line[0:wait_for]
    if preceding.endswith("time_system") or preceding.endswith("timeSystem()") or \
       preceding.endswith("time_system_"):
      return False
    return True

  # Determines whether the filename is either in the specified subdirectory, or
  # at the top level. We consider files in the top level for the benefit of
  # the check_format testcases in tools/testdata/check_format.
  def isInSubdir(self, filename, *subdirs):
    # Skip this check for check_format's unit-tests.
    if filename.count("/") <= 1:
      return True
    for subdir in subdirs:
      if filename.startswith('./' + subdir + '/'):
        return True
    return False

  # Determines if given token exists in line without leading or trailing token characters
  # e.g. will return True for a line containing foo() but not foo_bar() or baz_foo
  def tokenInLine(self, token, line):
    index = 0
    while True:
      index = line.find(token, index)
      # the following check has been changed from index < 1 to index < 0 because
      # this function incorrectly returns false when the token in question is the
      # first one in a line. The following line returns false when the token is present:
      # (no leading whitespace) violating_symbol foo;
      if index < 0:
        break
      if index == 0 or not (line[index - 1].isalnum() or line[index - 1] == '_'):
        if index + len(token) >= len(line) or not (line[index + len(token)].isalnum() or
                                                   line[index + len(token)] == '_'):
          return True
      index = index + 1
    return False

  def checkSourceLine(self, line, file_path, reportError):
    # Check fixable errors. These may have been fixed already.
    if line.find(".  ") != -1:
      reportError("over-enthusiastic spaces")
    if self.isInSubdir(file_path, 'source', 'include') and X_ENVOY_USED_DIRECTLY_REGEX.match(line):
      reportError(
          "Please do not use the raw literal x-envoy in source code. See Envoy::Http::PrefixValue."
      )
    if self.hasInvalidAngleBracketDirectory(line):
      reportError("envoy includes should not have angle brackets")
    for invalid_construct, valid_construct in PROTOBUF_TYPE_ERRORS.items():
      if invalid_construct in line:
        reportError("incorrect protobuf type reference %s; "
                    "should be %s" % (invalid_construct, valid_construct))
    for invalid_construct, valid_construct in LIBCXX_REPLACEMENTS.items():
      if invalid_construct in line:
        reportError("term %s should be replaced with standard library term %s" %
                    (invalid_construct, valid_construct))
    # Do not include the virtual_includes headers.
    if re.search("#include.*/_virtual_includes/", line):
      reportError("Don't include the virtual includes headers.")

    # Some errors cannot be fixed automatically, and actionable, consistent,
    # navigable messages should be emitted to make it easy to find and fix
    # the errors by hand.
    if not self.allowlistedForProtobufDeps(file_path):
      if '"google/protobuf' in line or "google::protobuf" in line:
        reportError("unexpected direct dependency on google.protobuf, use "
                    "the definitions in common/protobuf/protobuf.h instead.")
    if line.startswith("#include <mutex>") or line.startswith("#include <condition_variable"):
      # We don't check here for std::mutex because that may legitimately show up in
      # comments, for example this one.
      reportError("Don't use <mutex> or <condition_variable*>, switch to "
                  "Thread::MutexBasicLockable in source/common/common/thread.h")
    if line.startswith("#include <shared_mutex>"):
      # We don't check here for std::shared_timed_mutex because that may
      # legitimately show up in comments, for example this one.
      reportError("Don't use <shared_mutex>, use absl::Mutex for reader/writer locks.")
    if not self.allowlistedForRealTime(file_path) and not "NO_CHECK_FORMAT(real_time)" in line:
      if "RealTimeSource" in line or \
         ("RealTimeSystem" in line and not "TestRealTimeSystem" in line) or \
         "std::chrono::system_clock::now" in line or "std::chrono::steady_clock::now" in line or \
         "std::this_thread::sleep_for" in line or self.hasCondVarWaitFor(line):
        reportError("Don't reference real-world time sources from production code; use injection")
      duration_arg = DURATION_VALUE_REGEX.search(line)
      if duration_arg and duration_arg.group(1) != "0" and duration_arg.group(1) != "0.0":
        # Matching duration(int-const or float-const) other than zero
        reportError(
            "Don't use ambiguous duration(value), use an explicit duration type, e.g. Event::TimeSystem::Milliseconds(value)"
        )
    if not self.allowlistedForRegisterFactory(file_path):
      if "Registry::RegisterFactory<" in line or "REGISTER_FACTORY" in line:
        reportError("Don't use Registry::RegisterFactory or REGISTER_FACTORY in tests, "
                    "use Registry::InjectFactory instead.")
    if not self.allowlistedForUnpackTo(file_path):
      if "UnpackTo" in line:
        reportError("Don't use UnpackTo() directly, use MessageUtil::unpackTo() instead")
    # Check that we use the absl::Time library
    if self.tokenInLine("std::get_time", line):
      if "test/" in file_path:
        reportError("Don't use std::get_time; use TestUtility::parseTime in tests")
      else:
        reportError("Don't use std::get_time; use the injectable time system")
    if self.tokenInLine("std::put_time", line):
      reportError("Don't use std::put_time; use absl::Time equivalent instead")
    if self.tokenInLine("gmtime", line):
      reportError("Don't use gmtime; use absl::Time equivalent instead")
    if self.tokenInLine("mktime", line):
      reportError("Don't use mktime; use absl::Time equivalent instead")
    if self.tokenInLine("localtime", line):
      reportError("Don't use localtime; use absl::Time equivalent instead")
    if self.tokenInLine("strftime", line):
      reportError("Don't use strftime; use absl::FormatTime instead")
    if self.tokenInLine("strptime", line):
      reportError("Don't use strptime; use absl::FormatTime instead")
    if self.tokenInLine("strerror", line):
      reportError("Don't use strerror; use Envoy::errorDetails instead")
    # Prefer using abseil hash maps/sets over std::unordered_map/set for performance optimizations and
    # non-deterministic iteration order that exposes faulty assertions.
    # See: https://abseil.io/docs/cpp/guides/container#hash-tables
    if "std::unordered_map" in line:
      reportError("Don't use std::unordered_map; use absl::flat_hash_map instead or "
                  "absl::node_hash_map if pointer stability of keys/values is required")
    if "std::unordered_set" in line:
      reportError("Don't use std::unordered_set; use absl::flat_hash_set instead or "
                  "absl::node_hash_set if pointer stability of keys/values is required")
    if "std::atomic_" in line:
      # The std::atomic_* free functions are functionally equivalent to calling
      # operations on std::atomic<T> objects, so prefer to use that instead.
      reportError("Don't use free std::atomic_* functions, use std::atomic<T> members instead.")
    # Block usage of certain std types/functions as iOS 11 and macOS 10.13
    # do not support these at runtime.
    # See: https://github.com/envoyproxy/envoy/issues/12341
    if self.tokenInLine("std::any", line):
      reportError("Don't use std::any; use absl::any instead")
    if self.tokenInLine("std::get_if", line):
      reportError("Don't use std::get_if; use absl::get_if instead")
    if self.tokenInLine("std::holds_alternative", line):
      reportError("Don't use std::holds_alternative; use absl::holds_alternative instead")
    if self.tokenInLine("std::make_optional", line):
      reportError("Don't use std::make_optional; use absl::make_optional instead")
    if self.tokenInLine("std::monostate", line):
      reportError("Don't use std::monostate; use absl::monostate instead")
    if self.tokenInLine("std::optional", line):
      reportError("Don't use std::optional; use absl::optional instead")
    if self.tokenInLine("std::string_view", line):
      reportError("Don't use std::string_view; use absl::string_view instead")
    if self.tokenInLine("std::variant", line):
      reportError("Don't use std::variant; use absl::variant instead")
    if self.tokenInLine("std::visit", line):
      reportError("Don't use std::visit; use absl::visit instead")
    if "__attribute__((packed))" in line and file_path != "./include/envoy/common/platform.h":
      # __attribute__((packed)) is not supported by MSVC, we have a PACKED_STRUCT macro that
      # can be used instead
      reportError("Don't use __attribute__((packed)), use the PACKED_STRUCT macro defined "
                  "in include/envoy/common/platform.h instead")
    if DESIGNATED_INITIALIZER_REGEX.search(line):
      # Designated initializers are not part of the C++14 standard and are not supported
      # by MSVC
      reportError("Don't use designated initializers in struct initialization, "
                  "they are not part of C++14")
    if " ?: " in line:
      # The ?: operator is non-standard, it is a GCC extension
      reportError("Don't use the '?:' operator, it is a non-standard GCC extension")
    if line.startswith("using testing::Test;"):
      reportError("Don't use 'using testing::Test;', elaborate the type instead")
    if line.startswith("using testing::TestWithParams;"):
      reportError("Don't use 'using testing::TestWithParams;', elaborate the type instead")
    if TEST_NAME_STARTING_LOWER_CASE_REGEX.search(line):
      # Matches variants of TEST(), TEST_P(), TEST_F() etc. where the test name begins
      # with a lowercase letter.
      reportError("Test names should be CamelCase, starting with a capital letter")
    if not self.allowlistedForSerializeAsString(file_path) and "SerializeAsString" in line:
      # The MessageLite::SerializeAsString doesn't generate deterministic serialization,
      # use MessageUtil::hash instead.
      reportError(
          "Don't use MessageLite::SerializeAsString for generating deterministic serialization, use MessageUtil::hash instead."
      )
    if not self.allowlistedForJsonStringToMessage(file_path) and "JsonStringToMessage" in line:
      # Centralize all usage of JSON parsing so it is easier to make changes in JSON parsing
      # behavior.
      reportError("Don't use Protobuf::util::JsonStringToMessage, use TestUtility::loadFromJson.")

    if self.isInSubdir(file_path, 'source') and file_path.endswith('.cc') and \
       ('.counterFromString(' in line or '.gaugeFromString(' in line or \
        '.histogramFromString(' in line or '.textReadoutFromString(' in line or \
        '->counterFromString(' in line or '->gaugeFromString(' in line or \
        '->histogramFromString(' in line or '->textReadoutFromString(' in line):
      reportError("Don't lookup stats by name at runtime; use StatName saved during construction")

    if MANGLED_PROTOBUF_NAME_REGEX.search(line):
      reportError("Don't use mangled Protobuf names for enum constants")

    hist_m = HISTOGRAM_SI_SUFFIX_REGEX.search(line)
    if hist_m and not self.allowlistedForHistogramSiSuffix(hist_m.group(0)):
      reportError(
          "Don't suffix histogram names with the unit symbol, "
          "it's already part of the histogram object and unit-supporting sinks can use this information natively, "
          "other sinks can add the suffix automatically on flush should they prefer to do so.")

    if not self.allowlistedForStdRegex(file_path) and "std::regex" in line:
      reportError("Don't use std::regex in code that handles untrusted input. Use RegexMatcher")

    if not self.allowlistedForGrpcInit(file_path):
      grpc_init_or_shutdown = line.find("grpc_init()")
      grpc_shutdown = line.find("grpc_shutdown()")
      if grpc_init_or_shutdown == -1 or (grpc_shutdown != -1 and
                                         grpc_shutdown < grpc_init_or_shutdown):
        grpc_init_or_shutdown = grpc_shutdown
      if grpc_init_or_shutdown != -1:
        comment = line.find("// ")
        if comment == -1 or comment > grpc_init_or_shutdown:
          reportError("Don't call grpc_init() or grpc_shutdown() directly, instantiate " +
                      "Grpc::GoogleGrpcContext. See #8282")

    if self.denylistedForExceptions(file_path):
      # Skipping cases where 'throw' is a substring of a symbol like in "foothrowBar".
      if "throw" in line.split():
        comment_match = COMMENT_REGEX.search(line)
        if comment_match is None or comment_match.start(0) > line.find("throw"):
          reportError("Don't introduce throws into exception-free files, use error " +
                      "statuses instead.")

    if "lua_pushlightuserdata" in line:
      reportError(
          "Don't use lua_pushlightuserdata, since it can cause unprotected error in call to " +
          "Lua API (bad light userdata pointer) on ARM64 architecture. See " +
          "https://github.com/LuaJIT/LuaJIT/issues/450#issuecomment-433659873 for details.")

    if file_path.endswith(PROTO_SUFFIX):
      exclude_path = ['v1', 'v2', 'generated_api_shadow']
      result = PROTO_VALIDATION_STRING.search(line)
      if result is not None:
        if not any(x in file_path for x in exclude_path):
          reportError("min_bytes is DEPRECATED, Use min_len.")

  def checkBuildLine(self, line, file_path, reportError):
    if "@bazel_tools" in line and not (self.isStarlarkFile(file_path) or
                                       file_path.startswith("./bazel/") or
                                       "python/runfiles" in line):
      reportError("unexpected @bazel_tools reference, please indirect via a definition in //bazel")
    if not self.allowlistedForProtobufDeps(file_path) and '"protobuf"' in line:
      reportError("unexpected direct external dependency on protobuf, use "
                  "//source/common/protobuf instead.")
    if (self.envoy_build_rule_check and not self.isStarlarkFile(file_path) and
        not self.isWorkspaceFile(file_path) and not self.isExternalBuildFile(file_path) and
        "@envoy//" in line):
      reportError("Superfluous '@envoy//' prefix")

  def fixBuildLine(self, file_path, line, line_number):
    if (self.envoy_build_rule_check and not self.isStarlarkFile(file_path) and
        not self.isWorkspaceFile(file_path) and not self.isExternalBuildFile(file_path)):
      line = line.replace("@envoy//", "//")
    return line

  def fixBuildPath(self, file_path):
    self.evaluateLines(file_path, functools.partial(self.fixBuildLine, file_path))

    error_messages = []
    # TODO(htuch): Add API specific BUILD fixer script.
    if not self.isBuildFixerExcludedFile(file_path) and not self.isApiFile(
        file_path) and not self.isStarlarkFile(file_path) and not self.isWorkspaceFile(file_path):
      if os.system("%s %s %s" % (ENVOY_BUILD_FIXER_PATH, file_path, file_path)) != 0:
        error_messages += ["envoy_build_fixer rewrite failed for file: %s" % file_path]
      if os.system("%s -lint=fix -mode=fix %s" % (BUILDIFIER_PATH, file_path)) != 0:
        error_messages += ["buildifier rewrite failed for file: %s" % file_path]
    return error_messages

  def checkBuildPath(self, file_path):
    error_messages = []
    if not self.isBuildFixerExcludedFile(file_path) and not self.isApiFile(
        file_path) and not self.isStarlarkFile(file_path) and not self.isWorkspaceFile(file_path):
      command = "%s %s | diff %s -" % (ENVOY_BUILD_FIXER_PATH, file_path, file_path)
      error_messages += self.executeCommand(command, "envoy_build_fixer check failed", file_path)

    if self.isBuildFile(file_path) and (file_path.startswith(self.api_prefix + "envoy") or
                                        file_path.startswith(self.api_shadow_root + "envoy")):
      found = False
      for line in self.readLines(file_path):
        if "api_proto_package(" in line:
          found = True
          break
      if not found:
        error_messages += ["API build file does not provide api_proto_package()"]

    command = "%s -mode=diff %s" % (BUILDIFIER_PATH, file_path)
    error_messages += self.executeCommand(command, "buildifier check failed", file_path)
    error_messages += self.checkFileContents(file_path, self.checkBuildLine)
    return error_messages

  def fixSourcePath(self, file_path):
    self.evaluateLines(file_path, self.fixSourceLine)

    error_messages = []
    if not file_path.endswith(DOCS_SUFFIX):
      if not file_path.endswith(PROTO_SUFFIX):
        error_messages += self.fixHeaderOrder(file_path)
      error_messages += self.clangFormat(file_path)
    if file_path.endswith(PROTO_SUFFIX) and self.isApiFile(file_path):
      package_name, error_message = self.packageNameForProto(file_path)
      if package_name is None:
        error_messages += error_message
    return error_messages

  def checkSourcePath(self, file_path):
error_messages = self.checkFileContents(file_path, self.checkSourceLine) if not file_path.endswith(DOCS_SUFFIX): if not file_path.endswith(PROTO_SUFFIX): error_messages += self.checkNamespace(file_path) command = ("%s --include_dir_order %s --path %s | diff %s -" % (HEADER_ORDER_PATH, self.include_dir_order, file_path, file_path)) error_messages += self.executeCommand(command, "header_order.py check failed", file_path) command = ("%s %s | diff %s -" % (CLANG_FORMAT_PATH, file_path, file_path)) error_messages += self.executeCommand(command, "clang-format check failed", file_path) if file_path.endswith(PROTO_SUFFIX) and self.isApiFile(file_path): package_name, error_message = self.packageNameForProto(file_path) if package_name is None: error_messages += error_message return error_messages # Example target outputs are: # - "26,27c26" # - "12,13d13" # - "7a8,9" def executeCommand(self, command, error_message, file_path, regex=re.compile(r"^(\d+)[a|c|d]?\d*(?:,\d+[a|c|d]?\d*)?$")): try: output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT).strip() if output: return output.decode('utf-8').split("\n") return [] except subprocess.CalledProcessError as e: if (e.returncode != 0 and e.returncode != 1): return ["ERROR: something went wrong while executing: %s" % e.cmd] # In case we can't find any line numbers, record an error message first. 
error_messages = ["%s for file: %s" % (error_message, file_path)] for line in e.output.decode('utf-8').splitlines(): for num in regex.findall(line): error_messages.append(" %s:%s" % (file_path, num)) return error_messages def fixHeaderOrder(self, file_path): command = "%s --rewrite --include_dir_order %s --path %s" % (HEADER_ORDER_PATH, self.include_dir_order, file_path) if os.system(command) != 0: return ["header_order.py rewrite error: %s" % (file_path)] return [] def clangFormat(self, file_path): command = "%s -i %s" % (CLANG_FORMAT_PATH, file_path) if os.system(command) != 0: return ["clang-format rewrite error: %s" % (file_path)] return [] def checkFormat(self, file_path): if file_path.startswith(EXCLUDED_PREFIXES): return [] if not file_path.endswith(SUFFIXES): return [] error_messages = [] # Apply fixes first, if asked, and then run checks. If we wind up attempting to fix # an issue, but there's still an error, that's a problem. try_to_fix = self.operation_type == "fix" if self.isBuildFile(file_path) or self.isStarlarkFile(file_path) or self.isWorkspaceFile( file_path): if try_to_fix: error_messages += self.fixBuildPath(file_path) error_messages += self.checkBuildPath(file_path) else: if try_to_fix: error_messages += self.fixSourcePath(file_path) error_messages += self.checkSourcePath(file_path) if error_messages: return ["From %s" % file_path] + error_messages return error_messages def checkFormatReturnTraceOnError(self, file_path): """Run checkFormat and return the traceback of any exception.""" try: return self.checkFormat(file_path) except: return traceback.format_exc().split("\n") def checkOwners(self, dir_name, owned_directories, error_messages): """Checks to make sure a given directory is present either in CODEOWNERS or OWNED_EXTENSIONS Args: dir_name: the directory being checked. owned_directories: directories currently listed in CODEOWNERS. error_messages: where to put an error message for new unowned directories. 
""" found = False for owned in owned_directories: if owned.startswith(dir_name) or dir_name.startswith(owned): found = True if not found and dir_name not in UNOWNED_EXTENSIONS: error_messages.append("New directory %s appears to not have owners in CODEOWNERS" % dir_name) def checkApiShadowStarlarkFiles(self, file_path, error_messages): command = "diff -u " command += file_path + " " api_shadow_starlark_path = self.api_shadow_root + re.sub(r"\./api/", '', file_path) command += api_shadow_starlark_path error_message = self.executeCommand(command, "invalid .bzl in generated_api_shadow", file_path) if self.operation_type == "check": error_messages += error_message elif self.operation_type == "fix" and len(error_message) != 0: shutil.copy(file_path, api_shadow_starlark_path) return error_messages def checkFormatVisitor(self, arg, dir_name, names): """Run checkFormat in parallel for the given files. Args: arg: a tuple (pool, result_list, owned_directories, error_messages) pool and result_list are for starting tasks asynchronously. owned_directories tracks directories listed in the CODEOWNERS file. error_messages is a list of string format errors. dir_name: the parent directory of the given files. names: a list of file names. """ # Unpack the multiprocessing.Pool process pool and list of results. Since # python lists are passed as references, this is used to collect the list of # async results (futures) from running checkFormat and passing them back to # the caller. pool, result_list, owned_directories, error_messages = arg # Sanity check CODEOWNERS. This doesn't need to be done in a multi-threaded # manner as it is a small and limited list. source_prefix = './source/' full_prefix = './source/extensions/' # Check to see if this directory is a subdir under /source/extensions # Also ignore top level directories under /source/extensions since we don't # need owners for source/extensions/access_loggers etc, just the subdirectories. 
if dir_name.startswith(full_prefix) and '/' in dir_name[len(full_prefix):]: self.checkOwners(dir_name[len(source_prefix):], owned_directories, error_messages) for file_name in names: if dir_name.startswith("./api") and self.isStarlarkFile(file_name): result = pool.apply_async(self.checkApiShadowStarlarkFiles, args=(dir_name + "/" + file_name, error_messages)) result_list.append(result) result = pool.apply_async(self.checkFormatReturnTraceOnError, args=(dir_name + "/" + file_name,)) result_list.append(result) # checkErrorMessages iterates over the list with error messages and prints # errors and returns a bool based on whether there were any errors. def checkErrorMessages(self, error_messages): if error_messages: for e in error_messages: print("ERROR: %s" % e) return True return False if __name__ == "__main__": parser = argparse.ArgumentParser(description="Check or fix file format.") parser.add_argument("operation_type", type=str, choices=["check", "fix"], help="specify if the run should 'check' or 'fix' format.") parser.add_argument( "target_path", type=str, nargs="?", default=".", help="specify the root directory for the script to recurse over. Default '.'.") parser.add_argument("--add-excluded-prefixes", type=str, nargs="+", help="exclude additional prefixes.") parser.add_argument("-j", "--num-workers", type=int, default=multiprocessing.cpu_count(), help="number of worker processes to use; defaults to one per core.") parser.add_argument("--api-prefix", type=str, default="./api/", help="path of the API tree.") parser.add_argument("--api-shadow-prefix", type=str, default="./generated_api_shadow/", help="path of the shadow API tree.") parser.add_argument("--skip_envoy_build_rule_check", action="store_true", help="skip checking for '@envoy//' prefix in build rules.") parser.add_argument("--namespace_check", type=str, nargs="?", default="Envoy", help="specify namespace check string. 
Default 'Envoy'.") parser.add_argument("--namespace_check_excluded_paths", type=str, nargs="+", default=[], help="exclude paths from the namespace_check.") parser.add_argument("--build_fixer_check_excluded_paths", type=str, nargs="+", default=[], help="exclude paths from envoy_build_fixer check.") parser.add_argument("--include_dir_order", type=str, default=",".join(common.includeDirOrder()), help="specify the header block include directory order.") args = parser.parse_args() if args.add_excluded_prefixes: EXCLUDED_PREFIXES += tuple(args.add_excluded_prefixes) format_checker = FormatChecker(args) # Check whether all needed external tools are available. ct_error_messages = format_checker.checkTools() if format_checker.checkErrorMessages(ct_error_messages): sys.exit(1) # Returns the list of directories with owners listed in CODEOWNERS. May append errors to # error_messages. def ownedDirectories(error_messages): owned = [] maintainers = [ '@mattklein123', '@htuch', '@alyssawilk', '@zuercher', '@lizan', '@snowp', '@asraa', '@yavlasov', '@junr03', '@dio', '@jmarantz', '@antoniovicente' ] try: with open('./CODEOWNERS') as f: for line in f: # If this line is of the form "extensions/... @owner1 @owner2" capture the directory # name and store it in the list of directories with documented owners. m = EXTENSIONS_CODEOWNERS_REGEX.search(line) if m is not None and not line.startswith('#'): owned.append(m.group(1).strip()) owners = re.findall('@\S+', m.group(2).strip()) if len(owners) < 2: error_messages.append("Extensions require at least 2 owners in CODEOWNERS:\n" " {}".format(line)) maintainer = len(set(owners).intersection(set(maintainers))) > 0 if not maintainer: error_messages.append("Extensions require at least one maintainer OWNER:\n" " {}".format(line)) return owned except IOError: return [] # for the check format tests. # Calculate the list of owned directories once per run. 
error_messages = [] owned_directories = ownedDirectories(error_messages) if os.path.isfile(args.target_path): error_messages += format_checker.checkFormat("./" + args.target_path) else: results = [] def PooledCheckFormat(path_predicate): pool = multiprocessing.Pool(processes=args.num_workers) # For each file in target_path, start a new task in the pool and collect the # results (results is passed by reference, and is used as an output). for root, _, files in os.walk(args.target_path): format_checker.checkFormatVisitor((pool, results, owned_directories, error_messages), root, [f for f in files if path_predicate(f)]) # Close the pool to new tasks, wait for all of the running tasks to finish, # then collect the error messages. pool.close() pool.join() # We first run formatting on non-BUILD files, since the BUILD file format # requires analysis of srcs/hdrs in the BUILD file, and we don't want these # to be rewritten by other multiprocessing pooled processes. PooledCheckFormat(lambda f: not format_checker.isBuildFile(f)) PooledCheckFormat(lambda f: format_checker.isBuildFile(f)) error_messages += sum((r.get() for r in results), []) if format_checker.checkErrorMessages(error_messages): print("ERROR: check format failed. run 'tools/code_format/check_format.py fix'") sys.exit(1) if args.operation_type == "check": print("PASS")
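The `executeCommand` helper above extracts line numbers from `diff`'s normal-format hunk headers (the "26,27c26" style shown in its comment). As a standalone sketch, assuming the same regular expression, this is roughly what that parsing does (the function name `lines_from_diff` is invented for illustration):

```python
import re

# Same pattern as executeCommand's default: matches diff "normal format"
# hunk headers such as "26,27c26", "12,13d13", or "7a8,9", capturing the
# leading line number on the original-file side.
HUNK_RE = re.compile(r"^(\d+)[a|c|d]?\d*(?:,\d+[a|c|d]?\d*)?$")

def lines_from_diff(diff_output):
    """Return the starting line numbers named by diff hunk headers."""
    numbers = []
    for line in diff_output.splitlines():
        for num in HUNK_RE.findall(line):
            numbers.append(int(num))
    return numbers

print(lines_from_diff("26,27c26\n< foo\n---\n> bar\n12,13d13"))  # [26, 12]
```

Content lines ("< foo", "> bar", "---") never match because the pattern is anchored at both ends, so only the hunk headers contribute line numbers.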
All data sets are licensed under a Creative Commons Attribution 4.0 International License (CC BY 4). Per the CC BY 4 license it is understood that any use of the data set will properly acknowledge the individual(s) listed above using the suggested data citation. If you wish to use this data set, it is highly recommended that you contact the original principal investigator(s) (PI). Should the relevant PI be unavailable, please contact BCO-DMO ([email protected]) for additional guidance. For general guidance please see the BCO-DMO Terms of Use document. This dataset reports initial community conditions in Kane'ohe Bay including temperature, salinity, chlorophyll and naupliar abundance of two species of calanoid copepods, Parvocalanus crassirostris and Bestiolina similis as measured by microscopic counts and qPCR. These data are published in MEPS (2017) and are the result of M. Jungbluth's Ph.D. thesis work. Naupliar abundances of the 2 target species in situ were estimated using a quantitative polymerase chain reaction (qPCR)-based method (Jungbluth et al. 2013), as well as microscopic counts of calanoid and cyclopoid nauplii. The qPCR-based method allows application of individual species grazing rates to in situ abundances to estimate the total potential grazing impact of each species. Samples were collected by duplicate vertical microplankton net tows (0.5 m diameter ring net, 63 µm mesh) from near bottom (10 m depth) to the surface with a low speed flow meter (General Oceanics). The contents of each net were split quantitatively. One half was size-fractionated through a series of 5 Nitex sieves (63, 75, 80, 100, and 123 µm) to separate size groups of nauplii from later developmental stages, and each was preserved in 95% non-denatured ethyl alcohol (EtOH). 
The second half of the sample was preserved immediately in 95% EtOH for counts of total calanoid and total cyclopoid nauplii, which were used for comparison to the qPCR-based results of the abundance of each calanoid species. All samples were stored on ice in the field until being transferred to a -20°C freezer in the laboratory. EtOH in the sample bottles was replaced with fresh EtOH within 12 to 24 h of collection to ensure high-quality DNA for analysis (Bucklin 2000). The 3 smallest plankton size fractions from the net collection were analyzed with qPCR to enumerate P. crassirostris and B. similis nauplius abundances (Jungbluth et al. 2013). In brief, DNA was extracted from 3 plankton size fractions (63, 75, and 80 µm) using a modified QIAamp Mini Kit procedure (Qiagen). The total number of DNA copies in each sample was then measured using species-specific DNA primers and qPCR protocols (Jungbluth et al. 2013). On each qPCR plate, 4 to 5 standards spanning 4 to 5 orders of magnitude in DNA copy number were run along with the 2 biological replicates of a size fraction for each sampling date along with a no template control (NTC), all in triplicate. A range of 0.04 to 1 ng µl-1 of total DNA per sample was measured on each plate ensuring that the range of standards encompassed the amplification range of samples, with equal total DNA concentrations run in each well on individual plates. In all cases, amplification efficiencies ranged from 92 to 102%, and melt-curves indicated amplification of only the target species. The qPCR estimate of each species' mitochondrial cytochrome oxidase c subunit I (COI) DNA copy number was converted to an estimate of nauplius abundance using methods described in Jungbluth et al. (2013). Conditions Salinity and temperature in the field were measured using a YSI 6600V2 sonde prior to collecting water for bottle incubations. 
For chl a, triplicate 305 ml samples were filtered onto GF/Fs (Whatman), flash-frozen (LN2), and kept in a -80°C freezer until measurements were made 4 mo later. Chl a (and phaeopigment) was measured using a Turner Designs (model 10AU) fluorometer, using the standard extraction and acidification technique (Yentsch & Menzel 1963, Strickland & Parsons 1972). Flow meter: general term for a sensor that quantifies the rate at which fluids (e.g. water or air) pass through sensor packages, instruments, or sampling devices. A flow meter may be mechanical, optical, electromagnetic, etc. Microscope: instruments that generate enlarged images of samples using the phenomena of reflection and absorption of visible light. Includes conventional and inverted instruments. Also called a "light microscope".
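The amplification efficiencies quoted above (92 to 102%) are conventionally derived from the slope of a standard curve of Cq against log10 copy number via E = 10^(-1/slope) - 1. A generic sketch of that calculation follows; it illustrates the standard relation only, not the exact pipeline used in this study:

```python
def amplification_efficiency(log10_copies, cq_values):
    """Estimate qPCR amplification efficiency from a standard curve.

    Fits Cq against log10(copy number) by ordinary least squares and
    applies E = 10**(-1/slope) - 1, where E = 1.0 means 100% efficiency
    (perfect doubling each cycle).
    """
    n = len(log10_copies)
    mean_x = sum(log10_copies) / n
    mean_y = sum(cq_values) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(log10_copies, cq_values))
             / sum((x - mean_x) ** 2 for x in log10_copies))
    return 10 ** (-1.0 / slope) - 1.0

# Five standards spanning four orders of magnitude; a slope near -3.32
# cycles per decade corresponds to ~100% efficiency.
standards = [2, 3, 4, 5, 6]                       # log10 DNA copies
cqs = [30.0 - 3.32 * (x - 2) for x in standards]
print(round(amplification_efficiency(standards, cqs) * 100))  # 100
```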
COMPASSIONATE RELEASE for Stanley G. Rothenberg We, the undersigned, ask the Bureau of Prisons to request Compassionate Release on the following grounds: First, it is fundamentally unfair to sentence a 64-year-old man to a life sentence in federal prison for talking dirty on the Internet. Second, the egregious state of medical care provided in prisons leads to suffering far out of proportion to the sentence. Third, there is overwhelming evidence that Mr. Rothenberg is not a danger to society and that he never actually intended to engage in sexual conduct with a child. Background Mr. Rothenberg has been an openly gay man his entire life. At age 64, he was disabled by chronic back problems and chronic life-long anxiety, as well as a long-term dependence on prescription benzodiazepine drugs. After losing his life partner to AIDS, Mr. Rothenberg turned to Internet sex chat rooms for entertainment. He engaged in a number of conversations with many people in the chat rooms, including some as “private messages.” It was in the AOL Family Luv chat room that he encountered a police officer who posed as a father who “shared” his handicapped eleven-year-old daughter with “friends.” There was no child. Mr. Rothenberg has never had — and has never been charged with — any actual sexual contact with a minor. However, Mr. Rothenberg was in possession of child pornography, which he disclosed to police officers after his arrest and, in fact, told them where to locate the thumb drive holding the pictures. He had that in his possession in order to prove his bona fides. While some might doubt that claim, the very nature of the material on the thumb drive proves it. The pictures were of a wide range of ages, and of both male and female children. Anyone experienced with true pedophiles knows that they normally gravitate to specific genders and ages. This was clearly a collection meant to impress others rather than for personal use. The law, however, does not make that distinction, and Mr.
Rothenberg accepts that and acknowledges that under current law, possessing those pictures was unlawful. Mr. Rothenberg accepted complete responsibility for possession of the material and entered a guilty plea. He was subsequently sentenced to 25 years in prison. The sentence for possession of the pictures was 10 years. Sentencing Mr. Rothenberg to a life sentence for “talking dirty on the Internet” is fundamentally unfair. There is no evidence that he ever even spoke to a child in a lascivious manner, much less touched one inappropriately. Not once. However, the court found a pattern of conduct based on his participation in the chat rooms. Furthermore, the police officer specifically created the imaginary child’s biography to invoke enhancements to the sentencing guidelines. If the victim is under the age of 12 or the victim is handicapped, the sentence is increased. A life sentence for a non-contact offense against a child who does not exist is fundamentally unjust. Mr. Rothenberg was a 64-year-old man with no history of criminal conduct — in fact, with a lifetime of public service, charity fundraising, and a successful business career. When he signed the Change of Plea form, Mr. Rothenberg was undergoing serious withdrawal from a lifetime use of prescription benzodiazepines. Numerous psychiatric records document that fact. There is no question that these medications were obtained legally, were not abused, and that his use was always monitored by a physician. Mr. Rothenberg poses no danger to society and experts unanimously agree he is not a pedophile. His sole true offense was possessing child pornography, a fact that he immediately admitted and even told the officers where to find it. The sentence for possessing those images would be ten years. Mr. Rothenberg has been in prison since 2008 and will not be released until 2033. Psychiatric reports indicate that the probability that he will “reoffend” is minimal and that he is not a pedophile. 
We respectfully ask the Court to grant Mr. Rothenberg a Compassionate Release.
My desire is to grapple together here over how well off we are with God through Christ, and to live from His opinion of us. Tuesday, March 08, 2011 Catch The Whompers The obsolete arrangement between God and man (the Old Covenant) was never Christian—not even close. Not even. If now we make any attempt to wed it to the new and current arrangement by our efforts, our hopes or our expectations of God, we’re binding ourselves to frustration and confusion. If frustration and confusion are whomping on your life just now, consider your covenant. Trying to have them both means you’ll enjoy neither, let alone God. It would be like trying to mate a horse and a car and hoping to get somewhere with it (worse than the picture, though the exhaust system would be awful). There is no fit. It's crazy. If you're going to actually enjoy and truly like God, you've got to pay attention and catch the whompers. (I’m bothered by what this has done to the sons and daughters of God in relation to “hope in the Lord,” so I’ll write more soon. And if you weren't aware, I've got a lot to say about all this in my just-released book. Find out more at: http://lifecourse.org/Ralphs_Book.html)
Q: How do I stop IntelliJ searching for Incoming SVN Changes? My IntelliJ IDE (12.1.4) periodically searches for incoming changes in my connected SVN repositories. When I first installed IntelliJ these incoming changes weren't searched for automatically - if I remember correctly I had to click on the refresh button in the Incoming sub-tab within the Changes tab and set some options. I can't seem to switch this off. Collecting information on changes seems to cause performance issues for me - maybe due to the remote location of the repository. I can't see any options in the system preferences, and clicking refresh just refreshes! In summary - does anyone know how to stop IntelliJ collecting information on SVN changes? A: Sure, like this: Go to the same place as where you turned the automatic refresh feature on (the version control pane, marked by 9: Changes, and then the Repository tab) Hit the red X to Clear the VCS history cache (note: this won't delete anything important!) Hit the first icon with 2 circular blue arrows to Refresh the history, and now untick the Refresh changes every checkbox and hit OK The VCS history cache will now be refreshed once, but not periodically - refresh manually as needed. And you're done!
Q: Second quantization, creation and annihilation operators I found two notions of states for second quantization. One representation uses occupation numbers (see here, for example). Another one creates the (n+1)th particle in a collection of n existing states; see for instance here. Now, the problem is that in the first case the creation operator does $a_k^{\dagger} |N_1,N_2,..\rangle = \sqrt{N_k+1} |N_1,N_2,..,N_{k}+1,..\rangle$ and in the latter case $a_k^{\dagger} |n\rangle = \sqrt{n+1} |n+1 \rangle.$ So the action of this operator looks very different depending on whether you write down the states in terms of their occupation numbers or in terms of the ensemble of all the existing states. Unfortunately, I just don't get how these two pictures are related to each other. If anything is unclear, please let me know. A: @Xin Wang's last comment: In the first case you are simply, formally, looking at a collection of k_max different, uncoupled oscillators, but you're only doing anything with the k'th one. Here k is just an index, nothing more than a name for that specific oscillator. In the second case you only have one oscillator in your notation, so you don't actually need to give the creation operator an index, as it is implicitly fixed. It is actually even clumsy, since the corresponding occupation number variable n is not given the same index. Your question may be a semantic issue, but since you're not doing anything with any oscillator other than the k'th one, the particle numbers of the others stay fixed during the operation. It's just a definition to count the 'total particle number' by adding up all n_m.
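The two conventions agree once the spectator modes are suppressed: writing n ≡ N_k and dropping the unchanged occupation numbers from the ket turns the multi-mode rule into the single-mode one:

```latex
% Multi-mode form: only the k-th occupation number changes
a_k^{\dagger}\,\lvert N_1, N_2, \ldots, N_k, \ldots \rangle
  = \sqrt{N_k + 1}\,\lvert N_1, N_2, \ldots, N_k + 1, \ldots \rangle
% With n \equiv N_k and the spectator modes suppressed from the notation
% (they are untouched by a_k^{\dagger}), this is exactly the single-mode rule:
a^{\dagger}\,\lvert n \rangle = \sqrt{n + 1}\,\lvert n + 1 \rangle
```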
Q: Multiplying row in NumPy array by specific values based on another row I have the following list: ls = [[1,2,3], [3,4] , [5] , [7,8], [23], [90, 81]] This is my numpy array: array([[ 1, 0, 4, 3], [ 10, 100, 1000, 10000]]) I need to multiply the values in the second row of my array by the length of the list in ls at the index given by the corresponding number in the first row: 10 * len(ls[1]) & 100 * len(ls[0]) etc. The objective output would be this array: array([[ 1, 0, 4, 3], [ 20, 300, 1000, 20000]]) Is there an efficient way of doing this? A: Use a list comprehension to find the lengths and multiply them with the 2nd row of the array: ls = [[1,2,3], [3,4] , [5] , [7,8]] arr = np.array([[ 1, 0, 2, 3], [ 10, 100, 1000, 10000]]) arr[1,:] = arr[1,:]*([len(l) for l in ls]) arr array([[ 1, 0, 2, 3], [ 30, 200, 1000, 20000]]) EDIT : arr[1,:] = arr[1,:]*([len(ls[l]) for l in arr[0,:]]) arr array([[ 1, 0, 2, 3], [ 20, 300, 1000, 20000]])
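The lookup in the EDIT can also be written with NumPy fancy indexing: compute the length of every sub-list once, then gather by the indices stored in row 0 (a sketch using the question's original data):

```python
import numpy as np

ls = [[1, 2, 3], [3, 4], [5], [7, 8], [23], [90, 81]]
arr = np.array([[1, 0, 4, 3],
                [10, 100, 1000, 10000]])

lengths = np.array([len(l) for l in ls])  # [3 2 1 2 1 2]
arr[1] *= lengths[arr[0]]                 # row 0 holds indices into ls

print(arr)
# [[    1     0     4     3]
#  [   20   300  1000 20000]]
```

This avoids a Python-level loop over `arr[0]`; the gather `lengths[arr[0]]` is a single vectorized operation.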
Reinstated border control at Sweden’s internal border The Government has decided to reinstate internal border control for three months. The decision is based on the Government’s assessment that there is still a threat to public policy and internal security. The Government today appointed 31 state secretaries at the Government Offices. Former state secretaries have been dismissed from their positions. Most of the state secretaries have previously held corresponding positions at the Government Offices. Government invests in space – Esrange to have testbed The Esrange Space Centre should remain a strategic resource for national and international research, and the Government and the Swedish Space Corporation (SSC) are therefore investing SEK 80 million in a new test facility at the centre in Kiruna. Decision on application from Nord Stream 2 AG The Government today granted permission for the delineation of the course proposed by Nord Stream 2 AG for the laying of two pipelines on the continental shelf in the Swedish Exclusive Economic zone in the Baltic Sea.
Sweden and India agree to deepen their innovation cooperation Sweden and India today signed a joint innovation partnership to deepen the collaboration between the two countries and contribute to sustainable growth and new job opportunities. The partnership was signed in connection with Indian Prime Minister Narendra Modi’s visit to Stockholm. The Prime Minister, together with European Commission President Jean-Claude Juncker, has issued invitations to a social summit focusing on the promotion of fair jobs and growth, in Gothenburg on Friday 17 November. Heads of state and government, together with ministers from other EU member states, will attend.
South Wayne Historic District South Wayne Historic District may refer to: South Wayne Historic District (Fort Wayne, Indiana), listed on the National Register of Historic Places in Allen County, Indiana South Wayne Historic District (Wayne, Pennsylvania), listed on the National Register of Historic Places in Delaware County, Pennsylvania
Managing hepatitis B coinfection in HIV-infected patients. Since viral hepatitis is one of the most common causes of morbidity and mortality in HIV, it is critical to recognize and treat these patients appropriately. Hepatitis B infection is particularly difficult to manage as it changes with shifts in immune status. Inactive infection may flare up with restoration of CD4 cell count. In addition, many drugs used to treat HIV are also active against hepatitis B. Thus, patients may require therapy for both diseases or only for hepatitis B. The practicing physician must be aware of which drug to use with antiretrovirals and which can be used for hepatitis B alone. Current therapies for HIV that have hepatitis B activity include lamivudine, emtricitabine, and tenofovir. Therapies for hepatitis B without HIV activity are adefovir and entecavir. The major advances in the past year include emerging data on epidemiology, occult infection, genotypes, and newer therapies. Long-term management of hepatitis B includes monitoring for hepatocellular carcinoma. Two recent consensus conferences have provided excellent reviews of management of coinfection.
/* Generated by RuntimeBrowser Image: /System/Library/PrivateFrameworks/AppleServiceToolkit.framework/AppleServiceToolkit */ @interface ASTMaterializedConnectionManager : NSObject <ASTConnectionManager, ASTConnectionStatusDelegate> { <ASTConnectionManagerDelegate> * _delegate; ASTIdentity * _identity; ASTNetworking * _networking; NSString * _sessionId; } @property (readonly, copy) NSString *debugDescription; @property (nonatomic) <ASTConnectionManagerDelegate> *delegate; @property (readonly, copy) NSString *description; @property (readonly) unsigned long long hash; @property (nonatomic, retain) ASTIdentity *identity; @property (nonatomic, retain) ASTNetworking *networking; @property (nonatomic, retain) NSString *sessionId; @property (readonly) Class superclass; - (void).cxx_destruct; - (void)cancelAllTestResults; - (void)connection:(id)arg1 connectionStateChanged:(long long)arg2; - (void)connection:(id)arg1 didSendBodyData:(long long)arg2 totalBytesSent:(long long)arg3 totalBytesExpected:(long long)arg4; - (void)dealloc; - (id)delegate; - (void)downloadAsset:(id)arg1 destinationFileHandle:(id)arg2 allowsCellularAccess:(bool)arg3 completion:(id /* block */)arg4; - (id)identity; - (id)init; - (id)initWithSOCKSProxyServer:(id)arg1 port:(id)arg2; - (id)networking; - (bool)postAuthInfo:(id)arg1 allowsCellularAccess:(bool)arg2; - (id)postEnrollAllowingCellularAccess:(bool)arg1; - (bool)postProfile:(id)arg1 allowsCellularAccess:(bool)arg2; - (id)postRequest:(id)arg1 allowsCellularAccess:(bool)arg2; - (void)postSealableFile:(id)arg1 fileSequence:(id)arg2 totalFiles:(id)arg3 testId:(id)arg4 dataId:(id)arg5 allowsCellularAccess:(bool)arg6 completion:(id /* block */)arg7; - (void)postSessionExistsForIdentities:(id)arg1 ticket:(id)arg2 timeout:(double)arg3 allowsCellularAccess:(bool)arg4 completion:(id /* block */)arg5; - (void)postTestResult:(id)arg1 allowsCellularAccess:(bool)arg2 completion:(id /* block */)arg3; - (id)sessionId; - (void)setDelegate:(id)arg1; - 
(void)setIdentity:(id)arg1; - (void)setNetworking:(id)arg1; - (void)setSessionId:(id)arg1; @end
<?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://xmlns.jcp.org/xml/ns/javaee" xmlns:javaee="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified" version="2.3"> <xsd:annotation> <xsd:documentation> DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. Copyright (c) 2009-2013 Oracle and/or its affiliates. All rights reserved. The contents of this file are subject to the terms of either the GNU General Public License Version 2 only ("GPL") or the Common Development and Distribution License("CDDL") (collectively, the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the License at https://glassfish.dev.java.net/public/CDDL+GPL_1_1.html or packager/legal/LICENSE.txt. See the License for the specific language governing permissions and limitations under the License. When distributing the software, include this License Header Notice in each file and include the License file at packager/legal/LICENSE.txt. GPL Classpath Exception: Oracle designates this particular file as subject to the "Classpath" exception as provided by Oracle in the GPL Version 2 section of the License file that accompanied this code. Modifications: If applicable, add the following below the License Header, with the fields enclosed by brackets [] replaced by your own identifying information: "Portions Copyright [year] [name of copyright owner]" Contributor(s): If you wish your version of this file to be governed by only the CDDL or only the GPL Version 2, indicate your decision by adding "[Contributor] elects to include this software in this distribution under the [CDDL or GPL Version 2] license." 
If you don't indicate a single choice of license, a recipient has the option to distribute your version of this file under either the CDDL, the GPL Version 2 or to extend the choice of license to its licensees as provided above. However, if you add GPL Version 2 code and therefore, elected the GPL Version 2 license, then the option applies only if the new code is made subject to such option by the copyright holder. </xsd:documentation> </xsd:annotation> <xsd:annotation> <xsd:documentation> The Apache Software Foundation elects to include this software under the CDDL license. </xsd:documentation> </xsd:annotation> <xsd:annotation> <xsd:documentation> This is the XML Schema for the JSP 2.3 deployment descriptor types. The JSP 2.3 schema contains all the special structures and datatypes that are necessary to use JSP files from a web application. The contents of this schema is used by the web-common_3_1.xsd file to define JSP specific content. </xsd:documentation> </xsd:annotation> <xsd:annotation> <xsd:documentation> The following conventions apply to all Java EE deployment descriptor elements unless indicated otherwise. - In elements that specify a pathname to a file within the same JAR file, relative filenames (i.e., those not starting with "/") are considered relative to the root of the JAR file's namespace. Absolute filenames (i.e., those starting with "/") also specify names in the root of the JAR file's namespace. In general, relative names are preferred. The exception is .war files where absolute names are preferred for consistency with the Servlet API. </xsd:documentation> </xsd:annotation> <xsd:include schemaLocation="javaee_7.xsd"/> <!-- **************************************************** --> <xsd:complexType name="jsp-configType"> <xsd:annotation> <xsd:documentation> The jsp-configType is used to provide global configuration information for the JSP files in a web application. It has two subelements, taglib and jsp-property-group. 
</xsd:documentation> </xsd:annotation> <xsd:sequence> <xsd:element name="taglib" type="javaee:taglibType" minOccurs="0" maxOccurs="unbounded"/> <xsd:element name="jsp-property-group" type="javaee:jsp-property-groupType" minOccurs="0" maxOccurs="unbounded"/> </xsd:sequence> <xsd:attribute name="id" type="xsd:ID"/> </xsd:complexType> <!-- **************************************************** --> <xsd:complexType name="jsp-fileType"> <xsd:annotation> <xsd:documentation> The jsp-file element contains the full path to a JSP file within the web application beginning with a `/'. </xsd:documentation> </xsd:annotation> <xsd:simpleContent> <xsd:restriction base="javaee:pathType"/> </xsd:simpleContent> </xsd:complexType> <!-- **************************************************** --> <xsd:complexType name="jsp-property-groupType"> <xsd:annotation> <xsd:documentation> The jsp-property-groupType is used to group a number of files so they can be given global property information. All files so described are deemed to be JSP files. The following additional properties can be described: - Control whether EL is ignored. - Control whether scripting elements are invalid. - Indicate pageEncoding information. - Indicate that a resource is a JSP document (XML). - Prelude and Coda automatic includes. - Control whether the character sequence #{ is allowed when used as a String literal. - Control whether template text containing only whitespaces must be removed from the response output. - Indicate the default contentType information. - Indicate the default buffering model for JspWriter - Control whether error should be raised for the use of undeclared namespaces in a JSP page. 
</xsd:documentation> </xsd:annotation> <xsd:sequence> <xsd:group ref="javaee:descriptionGroup"/> <xsd:element name="url-pattern" type="javaee:url-patternType" maxOccurs="unbounded"/> <xsd:element name="el-ignored" type="javaee:true-falseType" minOccurs="0"> <xsd:annotation> <xsd:documentation> Can be used to easily set the isELIgnored property of a group of JSP pages. By default, the EL evaluation is enabled for Web Applications using a Servlet 2.4 or greater web.xml, and disabled otherwise. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="page-encoding" type="javaee:string" minOccurs="0"> <xsd:annotation> <xsd:documentation> The valid values of page-encoding are those of the pageEncoding page directive. It is a translation-time error to name different encodings in the pageEncoding attribute of the page directive of a JSP page and in a JSP configuration element matching the page. It is also a translation-time error to name different encodings in the prolog or text declaration of a document in XML syntax and in a JSP configuration element matching the document. It is legal to name the same encoding through multiple mechanisms. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="scripting-invalid" type="javaee:true-falseType" minOccurs="0"> <xsd:annotation> <xsd:documentation> Can be used to easily disable scripting in a group of JSP pages. By default, scripting is enabled. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="is-xml" type="javaee:true-falseType" minOccurs="0"> <xsd:annotation> <xsd:documentation> If true, denotes that the group of resources that match the URL pattern are JSP documents, and thus must be interpreted as XML documents. If false, the resources are assumed to not be JSP documents, unless there is another property group that indicates otherwise. 
</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="include-prelude" type="javaee:pathType" minOccurs="0" maxOccurs="unbounded"> <xsd:annotation> <xsd:documentation> The include-prelude element is a context-relative path that must correspond to an element in the Web Application. When the element is present, the given path will be automatically included (as in an include directive) at the beginning of each JSP page in this jsp-property-group. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="include-coda" type="javaee:pathType" minOccurs="0" maxOccurs="unbounded"> <xsd:annotation> <xsd:documentation> The include-coda element is a context-relative path that must correspond to an element in the Web Application. When the element is present, the given path will be automatically included (as in an include directive) at the end of each JSP page in this jsp-property-group. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="deferred-syntax-allowed-as-literal" type="javaee:true-falseType" minOccurs="0"> <xsd:annotation> <xsd:documentation> The character sequence #{ is reserved for EL expressions. Consequently, a translation error occurs if the #{ character sequence is used as a String literal, unless this element is enabled (true). Disabled (false) by default. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="trim-directive-whitespaces" type="javaee:true-falseType" minOccurs="0"> <xsd:annotation> <xsd:documentation> Indicates that template text containing only whitespaces must be removed from the response output. It has no effect on JSP documents (XML syntax). Disabled (false) by default. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="default-content-type" type="javaee:string" minOccurs="0"> <xsd:annotation> <xsd:documentation> The valid values of default-content-type are those of the contentType page directive. 
It specifies the default response contentType if the page directive does not include a contentType attribute. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="buffer" type="javaee:string" minOccurs="0"> <xsd:annotation> <xsd:documentation> The valid values of buffer are those of the buffer page directive. It specifies if buffering should be used for the output to response, and if so, the size of the buffer to use. </xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="error-on-undeclared-namespace" type="javaee:true-falseType" minOccurs="0"> <xsd:annotation> <xsd:documentation> The default behavior when a tag with unknown namespace is used in a JSP page (regular syntax) is to silently ignore it. If set to true, then an error must be raised during the translation time when an undeclared tag is used in a JSP page. Disabled (false) by default. </xsd:documentation> </xsd:annotation> </xsd:element> </xsd:sequence> <xsd:attribute name="id" type="xsd:ID"/> </xsd:complexType> <!-- **************************************************** --> <xsd:complexType name="taglibType"> <xsd:annotation> <xsd:documentation> The taglibType defines the syntax for declaring in the deployment descriptor that a tag library is available to the application. This can be done to override implicit map entries from TLD files and from the container. </xsd:documentation> </xsd:annotation> <xsd:sequence> <xsd:element name="taglib-uri" type="javaee:string"> <xsd:annotation> <xsd:documentation> A taglib-uri element describes a URI identifying a tag library used in the web application. The body of the taglib-uri element may be either an absolute URI specification, or a relative URI. There should be no entries in web.xml with the same taglib-uri value. 
</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="taglib-location" type="javaee:pathType"> <xsd:annotation> <xsd:documentation> the taglib-location element contains the location (as a resource relative to the root of the web application) where to find the Tag Library Description file for the tag library. </xsd:documentation> </xsd:annotation> </xsd:element> </xsd:sequence> <xsd:attribute name="id" type="xsd:ID"/> </xsd:complexType> </xsd:schema>
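As a purely illustrative example of the jsp-configType defined above (the taglib URI and file names are made up, not from the schema), a web.xml fragment using both subelements might look like this; note that children of jsp-property-group must appear in the sequence order the schema declares:

```xml
<jsp-config>
  <taglib>
    <taglib-uri>http://example.com/sample-taglib</taglib-uri>
    <taglib-location>/WEB-INF/sample.tld</taglib-location>
  </taglib>
  <jsp-property-group>
    <url-pattern>*.jsp</url-pattern>
    <el-ignored>false</el-ignored>
    <page-encoding>UTF-8</page-encoding>
    <scripting-invalid>true</scripting-invalid>
    <trim-directive-whitespaces>true</trim-directive-whitespaces>
  </jsp-property-group>
</jsp-config>
```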
1
0.415872
0.367839
In the UK, a charity that houses unwanted horses says it is being inundated with calls from equestrians who can no longer afford to keep their horses. The Horse Trust has received 640 requests to retire horses in the past month. Mark Worthington reports: Far from the city, an empty paddock and nothing but memories. After 15 years Shelagh Ball was forced to say goodbye to her beloved horse. This is what happens when the downturn starts to bite. SHELAGH BALL: The reason I've had to give up Fred is economics. Solely and purely economics. And the effect it's had on me is devastating. I mean I… just heartbreaking. I love that horse and if I could afford to keep him, I would for the rest of his life. But I can't. Shelagh isn't alone. Horse charities say record numbers are struggling to pay the bills. First came huge rises in costs - the price of feed doubled. Now there is less money around to pay for it all, and that's hitting businesses too. Garron Baines had already given up one horse through sickness. Now he's shutting down his horse-trekking company, meaning six more need new homes. GARRON BAINES: The horse business, or horse leisure riding, has fallen off a cliff in the last few weeks as people have looked at their household budgets and decided that it's too expensive to go horse-riding. And at the same time costs have been mounting significantly over the last year. It all means more work for those who care for unwanted animals. But charities fear this is only the beginning and that donations may begin to dry up just as huge numbers of horses need their help.
2
0.639918
0.140256
Apr 26, 2012; Eden Prairie, MN, USA; Minnesota Vikings general manager Rick Spielman talks with the media after the introduction of the 2013 1st round draft picks at Winter Park. Mandatory Credit: Bruce Kluckhohn-USA TODAY Sports Veteran scribes Sid Hartman and Peter King are on the same page on this one. When the Vikings’ turn comes to draft on the evening of May 8, the selection will be anything but a quarterback. Rick Spielman has spoken to both King and Hartman, and apparently convinced each man that the plan is to go BPA at 8 and take a quarterback later. “While there is much speculation that the Vikings will select a quarterback with the No. 8 overall pick in the NFL draft, General Manager Rick Spielman made it clear that he won’t draft a QB with the pick because he said they will take the best player on the board with their first selection, and there is no reason to believe that a quarterback will be the best player on the board,” Hartman said in a Sunday column. The estimable Mr. Hartman quoted Spielman explaining why the Vikings would do well to pass on a QB this year and address other needs. “There are some very good defensive players, some very good receivers in this draft, some good offensive linemen,” Spielman said. “There’s some significant linebackers that can play not only standing up but also help you rush the passer as well. I think we’re going to have a lot of options at 8, but we’re also going to potentially look to move out of that pick as well.” Peter King’s MMQB segment on Spielman included similar quotes. “That’s a big reason why we made it a high priority to sign Matt Cassel back. Every one of these quarterbacks … nothing is a sure thing,” Spielman told King. “There’s no Andrew Luck, no Peyton Manning. It is such a mixed bag with each player—every one of them has positives, every one of them has negatives. 
And if that’s the way you end up feeling, why don’t you just wait till later in the draft, and take someone with the first pick you’re sure will help you right now?” In the same piece, King pointed out that the Vikings will have a minicamp days before the draft, and indicated that Minnesota will use that minicamp to get a read on where Matt Cassel and Christian Ponder both are. The implication being that the Vikings could still elect to draft a quarterback at 8, if they become convinced that their present QBs aren’t good enough. Despite Spielman leaving the door open on taking a QB at 8 if their on-roster QBs stink enough, King told a Twin Cities media personality that he thinks he thinks he knows the Vikings will go away from QB at 8. In a tweet to Meatsauce responding to a question about what the Vikings will do at 8 King said, “Not a quarterback. They want a sure thing.” Straight from the keyboard of King and the quill pen of Sid Hartman. No quarterback for the Vikings at 8 this year. So Johnny Manziel, Blake Bortles, Teddy Bridgewater, any other quarterbacks who think they have a chance of being taken #8 overall? You can cancel that order for purple apparel, you can call off that Twin Cities area house search, you can delete all those sweet Minneapolis honies from your phone. Minnesota ain’t gonna happen for you. Memo to any teams expecting the Vikings to take a QB at 8? Listen to Sid Hartman and Peter King. It’s not going to happen. So submit your Ha Ha Clinton-Dix/Aaron Donald/C.J. Mosley/Jake Matthews/Odell Beckham-related trade proposals now. Like The Viking Age on Facebook. Follow TVA on Twitter. Subscribe to the Fansided Daily Newsletter. Sports news all up in your inbox.
1
0.682566
0.07197
Personal Statement Our team includes experienced and caring professionals who share the belief that our care should be comprehensive and courteous - responding fully to your individual needs and preferences. More about Dr. Krishnamurthy.C.V. Dr. Krishnamurthy.C.V. is a popular General Physician in Ganga Nagar, Bangalore. He studied and completed his MBBS. You can consult Dr. Krishnamurthy.C.V. at Aryan Multispeciality Hospital in Ganga Nagar, Bangalore. You can book an instant appointment online with Dr. Krishnamurthy.C.V. on Lybrate.com. Find numerous General Physicians in India from the comfort of your home on Lybrate.com. You will find General Physicians with more than 30 years of experience on Lybrate.com. You can find General Physicians online in Bangalore and from across India. View the profile of medical specialists and their reviews from other patients to make an informed decision. You can take that. You can also take other products of your choice. You should try Homeopathy for the acid reflux as it can help heal you naturally. A detailed case history is essential to analyse your case and select a remedy which suits your constitution. A proper, balanced and healthy diet is very important. Avoid all junk food and outside food. Have fruits and vegetables every day. You should also start doing Yoga as it can enhance the healing process. You can contact me online for a private consultation.
Take a good diet of fresh fruits and dry fruits, especially dates, almonds and anjeer. Stay free of stress and anxiety; do yoga and aerobics regularly. Communicate openly with your wife. Practice Kegel's and the pause-and-squeeze technique, with side-by-side entry or wife-above entry. Take capsule Tentex Royal by Himalaya for two months as mentioned on the container, and tablet Confido by Himalaya as mentioned on the container. Consulting a good sexologist is always good before doing anything. Hi, headaches are caused due to sinusitis, which may go unnoticed if you do not know its symptoms. Take the following medicines: Nat Sulph 30, 4 pills to be sucked thrice a day for 15 days; Kali Bich 200, 4 pills to be sucked thrice a day for 15 days. Take plain-water steam once a day. Avoid eating curd, ice creams, pickles, papad, citrus fruits, watermelon, green-skin bananas, pineapple, strawberries, custard apple and guavas. For fever take tablet paracetamol 650 mg; for cold take tablet cetirizine at night; for cough take Syp Ascoril-D 2.5 ml twice a day. Get your blood checked for CBC, MP, Widal, SGPT and urine R/M, and revert back to us with the reports. Baking soda is a good way to get rid of red marks on the face. When it is made into a paste and applied onto the face, the baking soda exfoliates your skin to minimize annoying acne scars. Mix one teaspoon of baking soda with two teaspoons of water and leave on the skin for a while before rinsing off.
1
0.829063
0.027172
Foster + Partners revealed its initial design of The One, Mizrahi Developments’ 860,300-square-foot skyscraper project in Toronto, in 2015 and now the architectural firm’s final vision is about to take shape—literally. Mizrahi recently broke ground on the 85-story mixed-use tower, which, at approximately 1,004 feet (or 306 meters) tall, will take on the title of the tallest building in Canada. Sited at the high-profile intersection of Yonge and Bloor streets, The One will act as a link of sorts between downtown Toronto and the trendy Yorkville district. Foster has produced a cutting-edge design that fits right into the established neighborhood. “The project creates a new anchor for high-end retail along Bloor Street West, while respecting the urban scale of Yonge Street. The design is respectful of the legacy of the William Luke Buildings, and incorporates the historic 19th century brick structures within the larger development,” Giles Robinson, senior partner at Foster + Partners, said in a prepared statement. Rendering of The One in Toronto The One will feature several levels of retail and restaurant space topped by approximately 420 luxury condominium residences, with the building’s distinctive façade offering indication of where the commercial portion of the structure ends and the residential segment begins. Additionally, as noted in an article by The Globe and Mail, the final design also features a 175-key hotel. CORE Architects is the collaborating architect on The One. The development is scheduled to reach completion in 2022. In the meantime, Foster’s projects continue to change skylines across the globe. Sky-high endeavors The attention that will accompany The One’s soaring height will be familiar territory for Foster. 
The firm designed MOL Campus in Budapest, Hungary, an 893,000-square-foot, 400-foot-tall high-rise project that will serve as oil and gas company MOL Group's new global headquarters and carry the distinction of being the tallest building in the city. And at the mixed-use development Varso Place in downtown Warsaw, Poland, Foster is the visionary behind the 1,018-foot-tall Varso Tower, which will be the tallest office building in Central and Eastern Europe. Tall buildings, those exceeding 200 meters (656 feet), are on the rise around the world. A total of 128 such structures delivered in 2016, marking a new annual record and bringing the total number of existing tall buildings to 1,168, a whopping 441 percent increase from the year 2000, according to a report by the Council on Tall Buildings and Urban Habitat. Ten supertall buildings, which are 300 meters (984 feet) or greater in height, came online in 2016. And as for the title of the tallest, 18 finished buildings became the tallest in a city, country or region in 2016.
1
1.056825
0.598013
// Testing Authentication API Routes // 🐨 import the things you'll need // 💰 here, I'll just give them to you. You're welcome // import axios from 'axios' // import {resetDb} from 'utils/db-utils' // import * as generate from 'utils/generate' // import startServer from '../start' // 🐨 you'll need to start/stop the server using beforeAll and afterAll // 💰 This might be helpful: server = await startServer({port: 8000}) // 🐨 beforeEach test in this file we want to reset the database test('auth flow', async () => { // 🐨 get a username and password from generate.loginForm() // // register // 🐨 use axios.post to post the username and password to the registration endpoint // 💰 http://localhost:8000/api/auth/register // // 🐨 assert that the result you get back is correct // 💰 it'll have an id and a token that will be random every time. // You can either only check that `result.data.user.username` is correct, or // for a little extra credit 💯 you can try using `expect.any(String)` // (an asymmetric matcher) with toEqual. // 📜 https://jestjs.io/docs/en/expect#expectanyconstructor // 📜 https://jestjs.io/docs/en/expect#toequalvalue // // login // 🐨 use axios.post to post the username and password again, but to the login endpoint // 💰 http://localhost:8000/api/auth/login // // 🐨 assert that the result you get back is correct // 💰 tip: the data you get back is exactly the same as the data you get back // from the registration call, so this can be done really easily by comparing // the data of those results with toEqual // // authenticated request // 🐨 use axios.get(url, config) to GET the user's information // 💰 http://localhost:8000/api/auth/me // 💰 This request must be authenticated via the Authorization header which // you can add to the config object: {headers: {Authorization: `Bearer ${token}`}} // Remember that you have the token from the registration and login requests. 
// // 🐨 assert that the result you get back is correct // 💰 (again, this should be the same data you get back in the other requests, // so you can compare it with that). })
2
0.404908
0.92935
/* * Copyright (C) 2012 Open Source Robotics Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ #include "gazebo/transport/CallbackHelper.hh" using namespace gazebo; using namespace transport; unsigned int CallbackHelper::idCounter = 0; ///////////////////////////////////////////////// CallbackHelper::CallbackHelper(bool _latching) : latching(_latching), id(idCounter++) { } ///////////////////////////////////////////////// CallbackHelper::~CallbackHelper() { } ///////////////////////////////////////////////// std::string CallbackHelper::GetMsgType() const { return std::string(); } ///////////////////////////////////////////////// bool CallbackHelper::GetLatching() const { std::lock_guard<std::mutex> lock(this->latchingMutex); return this->latching; } ///////////////////////////////////////////////// void CallbackHelper::SetLatching(bool _latch) { std::lock_guard<std::mutex> lock(this->latchingMutex); this->latching = _latch; } ///////////////////////////////////////////////// unsigned int CallbackHelper::GetId() const { return this->id; }
1
0.668897
0.968167
“Dr. Stranger”: Park Hae Jin and Kang So Ra’s Hearts Do Not Align. People who have loved realize how hard it is for love to be mutual. This is true both for those in a one-sided love and for those whose love is returned. He has the background and the looks, but because he couldn’t love on his own terms, he made viewers cry. In the drama, Han Jae Joon is a man whose father dies in a medical malpractice incident, and he is determined to ruin Myungwoo University Hospital. Despite his impressive background as an assistant professor at Harvard University, he becomes the head of Myungwoo University and commits to a love with Oh Soo Hyun. This was all part of his plan for revenge, and even from the very beginning of the drama Han Jae Joon looked at Oh Soo Hyun in a cold way. He looked sincere when he was in front of her, but Oh Soo Hyun couldn’t see his eyes, which betrayed his own ambitions. Nonetheless, his heart was lost to Oh Soo Hyun after a while. He said that it wasn’t love, but in the end the princess was more than just a tool for the destruction of the castle. The drama pits Lee Jong Suk’s character, who was born in South Korea but raised in North Korea, against Korea’s most elite doctor, played by Park Hae Jin. The two of them face the greatest conspiracy in this fusion medical drama. It is broadcast every Monday and Tuesday at 10 PM.
1
0.480558
0.051363
Shader "Hidden/BrightPassFilter2" { Properties { _MainTex ("Base (RGB)", 2D) = "" {} } CGINCLUDE #include "UnityCG.cginc" struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; }; sampler2D _MainTex; half4 _MainTex_ST; half4 _Threshhold; v2f vert( appdata_img v ) { v2f o; o.pos = UnityObjectToClipPos(v.vertex); o.uv = UnityStereoScreenSpaceUVAdjust(v.texcoord.xy, _MainTex_ST); return o; } half4 fragScalarThresh(v2f i) : SV_Target { half4 color = tex2D(_MainTex, i.uv); color.rgb = color.rgb; color.rgb = max(half3(0,0,0), color.rgb-_Threshhold.xxx); return color; } half4 fragColorThresh(v2f i) : SV_Target { half4 color = tex2D(_MainTex, i.uv); color.rgb = max(half3(0,0,0), color.rgb-_Threshhold.rgb); return color; } ENDCG Subshader { Pass { ZTest Always Cull Off ZWrite Off CGPROGRAM #pragma vertex vert #pragma fragment fragScalarThresh ENDCG } Pass { ZTest Always Cull Off ZWrite Off CGPROGRAM #pragma vertex vert #pragma fragment fragColorThresh ENDCG } } Fallback off }
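The two fragment programs above implement the same clamp-at-zero subtraction, once with a scalar threshold replicated across channels and once with a per-channel threshold. A plain-Python sketch of the math (illustrative only, not Unity code):

```python
# Plain-Python sketch of the bright-pass math in the two fragment shaders
# above: subtract a threshold from each channel and clamp at zero, so only
# colors brighter than the threshold contribute to the bloom.
def bright_pass_scalar(rgb, threshold):
    # mirrors fragScalarThresh: the same scalar threshold for every channel
    return tuple(max(0.0, c - threshold) for c in rgb)

def bright_pass_color(rgb, threshold_rgb):
    # mirrors fragColorThresh: an independent threshold per channel
    return tuple(max(0.0, c - t) for c, t in zip(rgb, threshold_rgb))

print(bright_pass_scalar((0.9, 0.4, 0.1), 0.5))  # prints (0.4, 0.0, 0.0)
```

Only the red channel of the sample pixel exceeds the 0.5 threshold, so the green and blue channels are clamped to zero.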
1
0.923879
0.999967
syntax = "proto3"; package types; // For more information on gogo.proto, see: // https://github.com/gogo/protobuf/blob/master/extensions.md import "github.com/gogo/protobuf/gogoproto/gogo.proto"; import "github.com/tendermint/tendermint/crypto/merkle/merkle.proto"; import "github.com/tendermint/tendermint/libs/common/types.proto"; import "google/protobuf/timestamp.proto"; // This file is copied from http://github.com/tendermint/abci // NOTE: When using custom types, mind the warnings. // https://github.com/gogo/protobuf/blob/master/custom_types.md#warnings-and-issues option (gogoproto.marshaler_all) = true; option (gogoproto.unmarshaler_all) = true; option (gogoproto.sizer_all) = true; option (gogoproto.goproto_registration) = true; // Generate tests option (gogoproto.populate_all) = true; option (gogoproto.equal_all) = true; option (gogoproto.testgen_all) = true; //---------------------------------------- // Request types message Request { oneof value { RequestEcho echo = 2; RequestFlush flush = 3; RequestInfo info = 4; RequestSetOption set_option = 5; RequestInitChain init_chain = 6; RequestQuery query = 7; RequestBeginBlock begin_block = 8; RequestCheckTx check_tx = 9; RequestDeliverTx deliver_tx = 19; RequestEndBlock end_block = 11; RequestCommit commit = 12; } } message RequestEcho { string message = 1; } message RequestFlush { } message RequestInfo { string version = 1; uint64 block_version = 2; uint64 p2p_version = 3; } // nondeterministic message RequestSetOption { string key = 1; string value = 2; } message RequestInitChain { google.protobuf.Timestamp time = 1 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true]; string chain_id = 2; ConsensusParams consensus_params = 3; repeated ValidatorUpdate validators = 4 [(gogoproto.nullable)=false]; bytes app_state_bytes = 5; } message RequestQuery { bytes data = 1; string path = 2; int64 height = 3; bool prove = 4; } message RequestBeginBlock { bytes hash = 1; Header header = 2 [(gogoproto.nullable)=false]; 
LastCommitInfo last_commit_info = 3 [(gogoproto.nullable)=false]; repeated Evidence byzantine_validators = 4 [(gogoproto.nullable)=false]; } enum CheckTxType { New = 0; Recheck = 1; } message RequestCheckTx { bytes tx = 1; CheckTxType type = 2; } message RequestDeliverTx { bytes tx = 1; } message RequestEndBlock { int64 height = 1; } message RequestCommit { } //---------------------------------------- // Response types message Response { oneof value { ResponseException exception = 1; ResponseEcho echo = 2; ResponseFlush flush = 3; ResponseInfo info = 4; ResponseSetOption set_option = 5; ResponseInitChain init_chain = 6; ResponseQuery query = 7; ResponseBeginBlock begin_block = 8; ResponseCheckTx check_tx = 9; ResponseDeliverTx deliver_tx = 10; ResponseEndBlock end_block = 11; ResponseCommit commit = 12; } } // nondeterministic message ResponseException { string error = 1; } message ResponseEcho { string message = 1; } message ResponseFlush { } message ResponseInfo { string data = 1; string version = 2; uint64 app_version = 3; int64 last_block_height = 4; bytes last_block_app_hash = 5; } // nondeterministic message ResponseSetOption { uint32 code = 1; // bytes data = 2; string log = 3; string info = 4; } message ResponseInitChain { ConsensusParams consensus_params = 1; repeated ValidatorUpdate validators = 2 [(gogoproto.nullable)=false]; } message ResponseQuery { uint32 code = 1; // bytes data = 2; // use "value" instead. 
string log = 3; // nondeterministic string info = 4; // nondeterministic int64 index = 5; bytes key = 6; bytes value = 7; merkle.Proof proof = 8; int64 height = 9; string codespace = 10; } message ResponseBeginBlock { repeated Event events = 1 [(gogoproto.nullable)=false, (gogoproto.jsontag)="events,omitempty"]; } message ResponseCheckTx { uint32 code = 1; bytes data = 2; string log = 3; // nondeterministic string info = 4; // nondeterministic int64 gas_wanted = 5; int64 gas_used = 6; repeated Event events = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="events,omitempty"]; string codespace = 8; } message ResponseDeliverTx { uint32 code = 1; bytes data = 2; string log = 3; // nondeterministic string info = 4; // nondeterministic int64 gas_wanted = 5; int64 gas_used = 6; repeated Event events = 7 [(gogoproto.nullable)=false, (gogoproto.jsontag)="events,omitempty"]; string codespace = 8; } message ResponseEndBlock { repeated ValidatorUpdate validator_updates = 1 [(gogoproto.nullable)=false]; ConsensusParams consensus_param_updates = 2; repeated Event events = 3 [(gogoproto.nullable)=false, (gogoproto.jsontag)="events,omitempty"]; } message ResponseCommit { // reserve 1 bytes data = 2; } //---------------------------------------- // Misc. // ConsensusParams contains all consensus-relevant parameters // that can be adjusted by the abci app message ConsensusParams { BlockParams block = 1; EvidenceParams evidence = 2; ValidatorParams validator = 3; } // BlockParams contains limits on the block size. message BlockParams { // Note: must be greater than 0 int64 max_bytes = 1; // Note: must be greater or equal to -1 int64 max_gas = 2; } // EvidenceParams contains limits on the evidence. message EvidenceParams { // Note: must be greater than 0 int64 max_age = 1; } // ValidatorParams contains limits on validators. 
message ValidatorParams {
  repeated string pub_key_types = 1;
}

message LastCommitInfo {
  int32 round = 1;
  repeated VoteInfo votes = 2 [(gogoproto.nullable)=false];
}

message Event {
  string type = 1;
  repeated common.KVPair attributes = 2 [(gogoproto.nullable)=false, (gogoproto.jsontag)="attributes,omitempty"];
}

//----------------------------------------
// Blockchain Types

message Header {
  // basic block info
  Version version = 1 [(gogoproto.nullable)=false];
  string chain_id = 2 [(gogoproto.customname)="ChainID"];
  int64 height = 3;
  google.protobuf.Timestamp time = 4 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
  int64 num_txs = 5;
  int64 total_txs = 6;

  // prev block info
  BlockID last_block_id = 7 [(gogoproto.nullable)=false];

  // hashes of block data
  bytes last_commit_hash = 8; // commit from validators from the last block
  bytes data_hash = 9;        // transactions

  // hashes from the app output from the prev block
  bytes validators_hash = 10;      // validators for the current block
  bytes next_validators_hash = 11; // validators for the next block
  bytes consensus_hash = 12;       // consensus params for current block
  bytes app_hash = 13;             // state after txs from the previous block
  bytes last_results_hash = 14;    // root hash of all results from the txs from the previous block

  // consensus info
  bytes evidence_hash = 15;    // evidence included in the block
  bytes proposer_address = 16; // original proposer of the block
}

message Version {
  uint64 Block = 1;
  uint64 App = 2;
}

message BlockID {
  bytes hash = 1;
  PartSetHeader parts_header = 2 [(gogoproto.nullable)=false];
}

message PartSetHeader {
  int32 total = 1;
  bytes hash = 2;
}

// Validator
message Validator {
  bytes address = 1;
  //PubKey pub_key = 2 [(gogoproto.nullable)=false];
  int64 power = 3;
}

// ValidatorUpdate
message ValidatorUpdate {
  PubKey pub_key = 1 [(gogoproto.nullable)=false];
  int64 power = 2;
}

// VoteInfo
message VoteInfo {
  Validator validator = 1 [(gogoproto.nullable)=false];
  bool signed_last_block = 2;
}

message PubKey {
  string type = 1;
  bytes data = 2;
}

message Evidence {
  string type = 1;
  Validator validator = 2 [(gogoproto.nullable)=false];
  int64 height = 3;
  google.protobuf.Timestamp time = 4 [(gogoproto.nullable)=false, (gogoproto.stdtime)=true];
  int64 total_voting_power = 5;
}

//----------------------------------------
// Service Definition

service ABCIApplication {
  rpc Echo(RequestEcho) returns (ResponseEcho);
  rpc Flush(RequestFlush) returns (ResponseFlush);
  rpc Info(RequestInfo) returns (ResponseInfo);
  rpc SetOption(RequestSetOption) returns (ResponseSetOption);
  rpc DeliverTx(RequestDeliverTx) returns (ResponseDeliverTx);
  rpc CheckTx(RequestCheckTx) returns (ResponseCheckTx);
  rpc Query(RequestQuery) returns (ResponseQuery);
  rpc Commit(RequestCommit) returns (ResponseCommit);
  rpc InitChain(RequestInitChain) returns (ResponseInitChain);
  rpc BeginBlock(RequestBeginBlock) returns (ResponseBeginBlock);
  rpc EndBlock(RequestEndBlock) returns (ResponseEndBlock);
}
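As an illustrative aside (not part of this schema file): the wire layout of the simplest message above, `PubKey`, follows directly from the standard proto3 encoding rules, where each length-delimited field is a varint tag, then a varint length, then the payload. A hand-rolled Python sketch of that encoding:

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_field(field_no: int, payload: bytes) -> bytes:
    """Encode a length-delimited field (wire type 2: strings, bytes, messages)."""
    tag = (field_no << 3) | 2
    return encode_varint(tag) + encode_varint(len(payload)) + payload

def encode_pub_key(key_type: str, data: bytes) -> bytes:
    """Wire bytes for `PubKey { string type = 1; bytes data = 2; }`."""
    return encode_field(1, key_type.encode()) + encode_field(2, data)

# tag 0x0a, length 7, "ed25519", then tag 0x12, length 2, raw bytes
assert encode_pub_key("ed25519", b"\x01\x02") == b"\x0a\x07ed25519\x12\x02\x01\x02"
```

In practice these bytes would come from protoc-generated code; the sketch only makes the tag/length/payload framing visible.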
Uma Maheswara Rao G added a comment - 17/Apr/12 15:56

I think we have to deal with the protocol type and check the directory consistencies based on that. Currently, the confirmation check will check only the namespace dirs. To check the shared edits, we can not use this logic. We have to do it depending on the shared journal type, for example the BookKeeper journal, etc. Also, initialization of the sharedEditsDirs option currently assumes the file protocol. If we configure any other type it may not work.

amith added a comment - 17/Apr/12 16:23

I agree with Uma. Currently I have created a patch which works with a shared dir configured with the file protocol. If any BookKeeper-related directory is configured then my patch will not fail the format:

for (Iterator<URI> it = dirsToFormat.iterator(); it.hasNext();) {
  File curDir = new File(it.next().getPath());
  // Its alright for a dir not to exist, or to exist (properly accessible)
  // and be completely empty.
  if (!curDir.exists() ||
      (curDir.isDirectory() && FileUtil.listFiles(curDir).length == 0))
    continue;

Here curDir.exists() will check locally and return false, so the user is not prompted to format this shared dir. I have another doubt: if I format an HDFS cluster which uses BookKeeper for shared storage, then ./hdfs namenode -format will not format the shared dir (the BookKeeper dir). Then how does the cluster work with the older version details?

Aaron T. Myers added a comment - 17/Apr/12 18:14

I think that for this JIRA we should punt on the other types of shared dirs besides file-based. I think we should make format look at the journal type and print something like "not formatting non-file journal manager..." How does that sound? At a later point in a different JIRA we can work on a more general initialization system which is totally agnostic to the type of journal manager.

Uma Maheswara Rao G added a comment - 17/Apr/12 18:32

How does that sound? At a later point in a different JIRA we can work on a more general initialization system which is totally agnostic to the type of journal manager.

Sounds good to me. +1. Here is the JIRA to support shared edits dirs (other than file based): HDFS-3287. @Amith, you can go ahead with this change as a limitation of non-file based shared dirs.

Hadoop QA added a comment - 18/Apr/12 19:42

+1 overall.
Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12523215/HDFS-3275.patch against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test files.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in .
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2297//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2297//console

This message is automatically generated.

Uma Maheswara Rao G added a comment - 18/Apr/12 20:50

Amith, thanks a lot for working on this issue. I just reviewed your patch! Some comments:

1) File base_dir = new File(System.getProperty("test.build.data", "build/test/data"), "dfs/");
Can't we use getBaseDirectory from MiniDFSCluster?

2) NameNode.format(conf); // Namenode should not format dummy or any other non file schemes
Instead of wrapping the comment into two lines, can we add it above the format call?

3) System.err.println("Storage directory " + dirUri + " is not in file scheme currently formatting is not supported for this scheme");
Can you please format this correctly? For example:
System.err.println("Storage directory " + " is not in file scheme currently " + "formatting is not supported for this scheme");

4) File curDir = new File(dirUri.getPath());
File will take a URI also, so no need to convert it to a string, right?

5) Also the message can be like: 'Formatting supported only for file based storage directories. Current directory scheme is "scheme". So, ignoring it for format'

6) HATestUtil#setFailoverConfigurations would do almost similar setup as in the test. Is it possible to use it by passing a mock cluster or a slightly changed HATestUtil#setFailoverConfigurations?

7) You mean "Could not delete hdfs directory '" -> "Could not delete namespace directory '"

8) testOnlyFileSchemeDirsAreFormatted -> testFormatShouldBeIgnoredForNonFileBasedDirs?

Uma Maheswara Rao G added a comment - 24/Apr/12 03:43

Patch looks good. An assert has been added in the format API, so the test ensures that there are no exceptions out of it when we include non-file based journals. +1. Re-attaching the same patch as Amith to trigger Jenkins.

Hadoop QA added a comment - 24/Apr/12 05:13

+1 overall.

Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12523911/HDFS-3275_1.patch against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 2 new or modified test files.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in .
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2316//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2316//console

This message is automatically generated.

Aaron T. Myers added a comment - 24/Apr/12 06:49

Patch looks pretty good to me. Just a few little comments. +1 once these are addressed:

Don't declare the "DEFAULT_SCHEME" constant in the NameNode class.
Instead, use the NNStorage.LOCAL_URI_SCHEME constant, which is used in FSEditLog to identify local edits logs.

I think it's better to include the URI of the dir we're skipping, and the scheme we expect. So, instead of this:

System.err.println("Formatting supported only for file based storage" + " directories. Current directory scheme is \"" + dirUri.getScheme() + "\". So, ignoring it for format");

How about something like this:

System.err.println("Skipping format for directory \"" + dirUri + "\". Can only format local directories with scheme \"" + NNStorage.LOCAL_URI_SCHEME + "\".");

"supported for" + dirUri; - put a space after "for"

Odd javadoc formatting, and typo "with out" -> "without":

+ /** Sets the required configurations for performing failover.
+  * with out any dependency on MiniDFSCluster
+  * */

Recommend adding a comment to the assert in NameNode#confirmFormat that the presence of the assert is necessary for the validity of the test.

Aaron T. Myers added a comment - 24/Apr/12 18:47

This comment still isn't formatted correctly, and I think you can remove the "." in this sentence.

+ /** Sets the required configurations for performing failover.
+  * without any dependency on MiniDFSCluster
+  */

Otherwise it looks good. +1.

Uma Maheswara Rao G added a comment - 24/Apr/12 19:28

Amith, small comment:

+ * Sets the required configurations for performing failover
+ * without any dependency on MiniDFSCluster

Why do we need to mention that there is 'no dependency on MiniDFSCluster'? Since this is a util method, we need not mention this, right? Very sorry for not pointing this out in my previous review. Thanks for your work!

java.util.NoSuchElementException
    at java.util.AbstractList$Itr.next(AbstractList.java:350)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.confirmFormat(NameNode.java:731)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:685)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:228)
    at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:122)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:680)

Uma Maheswara Rao G added a comment - 24/Apr/12 19:56

Looks like you have missed one line in HDFS-3275_2.patch and HDFS-3275_3.patch. Below code from HDFS-3275_1.patch:

+ assert dirUri.getScheme().equals(DEFAULT_SCHEME) : "formatting is not "
+     + "supported for " + dirUri;
+
+ File curDir = new File(dirUri.getPath());
  // Its alright for a dir not to exist, or to exist (properly accessible)

Please take care in the next version of the patch.

Hadoop QA added a comment - 28/Apr/12 20:13

+1 overall.

Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12524985/HDFS-3275-4.patch against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 2 new or modified test files.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in .
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2350//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2350//console

This message is automatically generated.
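The scheme check the reviewers converge on above is easy to state outside of Hadoop. Below is a minimal Python sketch (illustrative only; the actual patch is Java inside NameNode, and `LOCAL_URI_SCHEME` here mirrors the `NNStorage.LOCAL_URI_SCHEME` constant mentioned in the review): during format, only directories whose URI uses the local `file` scheme are considered, and any other scheme (e.g. a BookKeeper journal URI) is skipped with a message instead of failing the format.

```python
from urllib.parse import urlparse

LOCAL_URI_SCHEME = "file"  # mirrors NNStorage.LOCAL_URI_SCHEME in the review

def dirs_to_format(dir_uris):
    """Return the URIs eligible for local formatting, skipping the rest."""
    formattable = []
    for uri in dir_uris:
        if urlparse(uri).scheme != LOCAL_URI_SCHEME:
            # Skip (do not fail) non-file journals, per the review discussion.
            print(f'Skipping format for directory "{uri}". Can only format '
                  f'local directories with scheme "{LOCAL_URI_SCHEME}".')
            continue
        formattable.append(uri)
    return formattable

assert dirs_to_format(["file:///data/dfs/name",
                       "bookkeeper://zk1:2181/ledgers"]) == ["file:///data/dfs/name"]
```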
We can’t wait for you to visit! Our Sales and Design Center as well as our eight model homes are open Monday through Saturday 9 a.m. to 6 p.m. and Sunday from noon to 6 p.m. For a personalized visit and an opportunity to tour the future amenity site, schedule an appointment by contacting us at 512-539-3700 or by filling out the form at our Schedule your Visit page. We’ll see you real soon! The name Kissing Tree refers to Sam Houston’s gubernatorial speech in 1857 in front of a mighty oak tree in San Marcos. After the speech, he famously kissed several of the female attendees on the cheek, creating a bit of a local legend. Watch the video about the legend here. The lifestyle, homes and amenities at Kissing Tree are created with the active adult lifestyle in mind. Under 55 and looking for a place to call home in San Marcos? Blanco Vista is a vibrant community situated on 575 acres of prime riverfront land in Northern San Marcos. Brookfield Residential is the developer for this expansive master-planned community which caters to every stage of life. The community offers a wide array of first-class amenities – from a fully-stocked fishing pond to a network of interconnected hike and bike trails. Click here to discover the Blanco Vista community. Kissing Tree is a planned 3,200-home community. We currently offer 18 floor plans and five architecturally distinct exteriors which work together to create an eclectic and diverse community streetscape. The HOA fee will include all of the maintenance of the community common areas, access to the amenity buildings, 24/7 security and reduced green fees for residents at the Kissing Tree Golf Course. The HOA assessment is anticipated to be $210 per month. Kissing Tree provides options that allow you to live life the way you want. We offer several landscape options and allow you to design the best plan for your lifestyle including low maintenance designs. We look forward to providing products that fit your lifestyle. 
We are not currently offering Garden Homes in our first two neighborhoods, Fair Park and Driskill; however, there may be opportunities for additional products in future development. There’s an unlimited amount of fun activities to do both indoors and outdoors right around the corner from Kissing Tree. Head over to our locality map to find out more about where you can create, taste and thrive in San Marcos! There is no RV parking on the Kissing Tree property; however, RV parking can be found just outside the community across the street on Hunter Road. You’ll find ample space for your covered and climate-controlled items at several nearby storage facilities. The Mix is Kissing Tree’s one-of-a-kind collection of amenities that brings an active and fun approach to this unique 55-plus community. An upbeat social hub, The Mix will include a mix of amenities that allow you to thrive, create and taste! At Kissing Tree there is a unique focus on health and well-being, foods and flavors, and arts and cultures. 8 pickleball courts, 6 bocce ball courts, 2 horseshoe pits, 3 holes of the future 18-hole putting course, driving range, short game practice area and Lone Star Loop hiking trail are now open! The golf course construction is in full swing and scheduled to open for play in late summer 2018 with a temporary clubhouse until permanent construction is complete. The social building, Independence Hall, and Welcome Center will open this year as well. Be sure to follow us on social media to learn more about the timing of future amenities! We are having fun at Kissing Tree! You can get into all kinds of activities any day of the week including activities focused on health and well-being, the food revolution and arts and culture. Explore the fun of these themes on our site by following the icons for Thrive, Taste and Create!
Join us even before moving in by signing up for one of our events here: Kissingtree.com/events Join us for some fun while we thrive, create and taste at our distinctly Texan community! Kevin Wilson, Kissing Tree’s Lifestyle Director, will get you jumping into the list of activities for 2017. To find out more, contact Kevin at [email protected] or 210-336-2227. Take a look at the great list of Kissing Tree and Hill Country events on our event page here: Kissingtree.com/events The Kissing Tree Golf Course will be semi-private with priority tee times and discounted rates provided to residents. The course will be open to the public with discounts given to San Marcos residents. Brookfield Residential is the sole developer and homebuilder for Kissing Tree. Through our expertise, passion and focus on outstanding customer service, we strive to create the best places to call home. At every stage of life, our thoughtfully designed communities make it easy for buyers to find their dream home. For more information, visit BrookfieldTX.com Kissing Tree homes are built with a Texas attitude, and each home can be made your own with a variety of architectural styles to choose from, as well as an array of finishes, options, colors and features. Our homes are built with industry-leading green and sustainable practices and incorporate the latest in energy efficiency. With 18 floor plans available, the plans reflect Brookfield Residential’s focus on thoughtfully designing homes with the homebuyer in mind. View our plans here. The two home series allow you to choose what’s most important in your home. The most significant differences between the series include a higher ceiling plate height in the Regent series, along with 360-degree architecture around the exterior of the home. The Designer Contract allows you to build your home from the included 18 floor plans and five architecturally distinct exteriors.
The Distinctive Contract provides the opportunity to make custom architectural changes to your floorplan. The Distinctive home buyer is invited to select home sites in future phases of the development. You can make the home uniquely yours by having the freedom to rearrange elements of the floor plan for a more customized design. The Distinctive Contract is a program which allows you to make selections and changes that are not included in the standard portfolio of offerings. We currently have seven Quick Move-in Homes under construction and ready for move-in March of 2017. Because of the high interest in the community, we have a simple reservation program that allows you to save your place in line to select your future home site. For more information on this program or to set an appointment, please reach out to a helpful team member at [email protected] or 512-539-3700. Brookfield Residential Texas, a division of Brookfield Residential, is a full-service homebuilder and developer in Central Texas. Through expertise, passion and focus on outstanding customer service, we’ve been helping homebuyers find the best places to call home in Central Texas for more than 10 years. At every stage of life, our thoughtfully designed communities and homes make it easy for buyers to fulfill their dreams. For more than 50 years, Brookfield Residential has been developing communities and crafting homes of distinction throughout North America. For the last decade, we've been setting down roots in Central Texas, right here in the Greater Austin area. So that Texas accent you hear – it comes naturally. We can’t wait to share our community with you! We’re excited to offer our Realtor friends exclusive access to our golf course, clubhouse and amenities. “The Grove” is our Realtor program, and we can’t wait to tell you more in the spring of 2017!
Casper Star Tribune: Warning bells are ringing across Wyoming’s Powder River Basin that the largest producing coal region of the country is in big trouble. One of the largest players, Cloud Peak Energy, is likely facing bankruptcy. A newcomer to coal country, Blackjewel LLC has struggled to pay its taxes despite increasing production, and the total volume of Wyoming’s black rock that miners are estimated to produce – a number that translates to jobs, state and county revenue — keeps going down. After the coal bust of 2015, when 1,000 Wyoming miners lost work and three coal companies went through bankruptcy, a period of stability settled over the coal sector in Wyoming. The idea that coal would slowly decline, partly buoyed up by the results of carbon research, and just maybe an export avenue to buyers in the Pacific Rim, took hold. Wyoming made its peace with the idea that coal’s best years were likely behind her, but that a more modest future for Wyoming coal, with manageable losses over time, was also likely. That may not be the case. Within 10 years, demand for Powder River Basin coal could fall to 176 million tons, said John Hanou, president of Hanou Energy Consulting and a long-time expert on the Powder River Basin. That figure includes Montana’s production and presumes that coal plants in the U.S. are taken offline as soon as they hit 60 years of age. If Wyoming is lucky and gas prices are high, that count could hold closer to 224 million. Or it could be even worse. Economics could push out existing demand even faster, while wind development going up in the Midwest could eat into Wyoming’s coal market in that region. Natural gas prices, high or low, could alter the rate of change in Wyoming’s coal sector. More: Wyoming coal is likely declining faster than expected
Heath® Made with a classic candy bar favorite, the Heath Bar! Delicious bits of milk chocolate covered English Toffee are mixed throughout and sprinkled on top of a delicious, hand-dipped vanilla Milkshake for plenty of craveable Heath Bar flavor.
Andrew Yang, the 2020 Democratic presidential hopeful, called out WeWork in a tweet on Wednesday. He called the company's $47 billion valuation "utterly ridiculous," agreeing with New York University professor Scott Galloway's piece on Business Insider. WeWork has come under fire for multiple bizarre points uncovered in its S-1 filing ahead of its initial public offering. The WeWork backlash continues. Andrew Yang, the 2020 presidential hopeful most popular for proposing universal income of $1,000 per month, tweeted his support for NYU Professor Scott Galloway's piece on Business Insider calling WeWork "WeWTF" on Wednesday. "For what it's worth I agree with @profgalloway that WeWork's valuation is utterly ridiculous," Yang tweeted. "If they are a tech company so is UPS. UPS trades for 1.4x revenue not 26x." WeWork currently carries a valuation of $47 billion, and says it expects revenue to be $3 billion this year. Galloway poked holes in the valuation in his piece, calling it an illusion and saying "any equity analyst who endorses this stock above a $10 billion valuation is lying, stupid, or both." In his tweet, Yang pointed out that the United Parcel Service trades at about 1.4 times its revenue. If WeWork is considered a tech company, Yang wrote, then UPS should be as well. Even within the world of tech, Galloway points out that WeWork's valuation is extremely high and — in his view — unfounded. Amazon, another tech-adjacent e-commerce company, trades at about four times its revenue, he wrote. WeWork has been in the spotlight recently after filing its preliminary paperwork for an upcoming initial public offering. Analysts have called the company cultish, called out its extreme $1.6 billion in losses, and said that it operates more like a real estate company than a tech company.
---
abstract: 'We report on the analysis of the [*Kepler*]{} short-cadence (SC) light curve of V344 Lyr obtained during 2009 June 20 through 2010 Mar 19 (Q2–Q4). The system is an SU UMa star showing dwarf nova outbursts and superoutbursts, and promises to be a touchstone for CV studies for the foreseeable future. The system displays both positive and negative superhumps with periods of 2.20 and 2.06-hr, respectively, and we identify an orbital period of 2.11-hr. The positive superhumps have a maximum amplitude of $\sim$0.25-mag, the negative superhumps a maximum amplitude of $\sim$0.8 mag, and the orbital period at quiescence has an amplitude of $\sim$0.025 mag. The quality of the [*Kepler*]{} data is such that we can test vigorously the models for accretion disk dynamics that have been emerging in the past several years. The SC data for V344 Lyr are consistent with the model that two physical sources yield positive superhumps: early in the superoutburst, the superhump signal is generated by viscous dissipation within the periodically flexing disk, but late in the superoutburst, the signal is generated as the accretion stream bright spot sweeps around the rim of the non-axisymmetric disk. The disk superhumps are roughly anti-phased with the stream/late superhumps. The V344 Lyr data also reveal negative superhumps arising from accretion onto a tilted disk precessing in the retrograde direction, and suggest that negative superhumps may appear during the decline of DN outbursts. The period of negative superhumps has a positive $\dot P$ in between outbursts.'
author:
- 'Matt A. Wood, Martin D. Still, Steve B. Howell, John K. Cannizzo, Alan P. Smale'
title: 'V344 Lyrae: A Touchstone SU UMa Cataclysmic Variable in the Kepler Field'
---

Introduction
============

Cataclysmic variable (CV) binary systems typically consist of low-mass main sequence stars that transfer mass through the L1 inner Lagrange point and onto a white dwarf primary via an accretion disk.
Within the disk, viscosity acts to transport angular momentum outward in radius, allowing mass to move inward and accrete onto the primary white dwarf [e.g. @warner95; @fkr02; @hellier01]. In the case of steady-state accretion the disk is the brightest component of the system, with a disk luminosity $L_{\rm disk} \sim GM_1 \dot M_1/R_1$, where $\dot M_1$ is the mass accretion rate onto a white dwarf of mass $M_1$ and radius $R_1$. While members of the novalike (NL) CV subclass display a nearly constant mean system luminosity, members of the dwarf nova (DN) subclass display quasi-periodic outbursts of a few magnitudes thought to arise from a thermal instability in the disk. Specifically, models suggest a heating wave rapidly transitions the disk to a hot, high-viscosity state which significantly enhances $\dot M_1$ for a few days. Furthermore, within the DN subclass there are the SU UMa systems that in addition to normal DN outbursts display superoutbursts which are up to a magnitude brighter and last a few times longer than the DN outbursts. The SU UMa stars are characterized by the appearance at superoutburst of periodic large-amplitude photometric signals (termed [*positive superhumps*]{}) with periods a few percent longer than the system orbital periods. So-called [*negative*]{} superhumps (with periods a few percent shorter than ${P_{\rm orb}}$) are also observed in some SU UMa systems. The oscillation modes (i.e., eigenfrequencies) of any physical object are a direct function of the structure of that object, and thus an intensive study of SU UMa superhumps that can make use of both a nearly-ideal time-series data set and detailed three-dimensional high-resolution numerical models has the potential to eventually unlock many of the long-standing puzzles in accretion disk physics. For example, a fundamental question in astrophysical hydrodynamics is the nature of viscosity in differentially rotating plasma disks.
It is typically thought to result from the magnetorotational instability (MRI) proposed by @bh98 [@balbus03], but the observations to-date have been insufficient to test the model.

V344 Lyrae
----------

The [*Kepler*]{} field of view includes 12 CVs in the [*Kepler*]{} Input Catalog (KIC) that have published results at the time of this writing. Ten (10) of these systems are listed in Table 1 of @still10 [hereafter Paper I]. Two additional systems have been announced since that publication, the dwarf nova system BOKS-45906 (KIC 9778689) [@feldmeier11], and the AM CVn star SDSS J190817.07$+$394036.4 (KIC 4547333) [@fontaine11]. The star V344 Lyr (KIC 7659570) is an SU UMa star that lies in the [*Kepler*]{} field. @kato93 observed the star during a superoutburst ($V\sim14$), and reported the detection of superhumps with a period $P = 2.1948\pm 0.0005$ hr. In a later study @kato02 reported that the DN outbursts have a recurrence timescale of $16\pm3$ d, and that the superoutbursts have a recurrence timescale of $\sim$110 d. @ak08 estimated a distance of 619 pc for the star using a period-luminosity relationship. In Paper I we reported preliminary findings for V344 Lyr based on the second-quarter (Q2) [*Kepler*]{} observations, during which [*Kepler*]{} observed the star with a $\sim$1-min cadence, obtaining over 123,000 photometric measurements. In that paper we reported on a periodic signal at quiescence that was either the orbital or negative superhump period, and the fact that the positive superhump signal persisted into quiescence and through the following dwarf nova outburst. In @cannizzo10 [hereafter Paper II] we presented time-dependent modeling based on the accretion disk limit cycle model for the 270 d (Q2–Q4) light curve of V344 Lyr.
We reported that the main decay of the superoutbursts is nearly perfectly exponential, decaying at a rate of $\sim$12 d mag$^{-1}$, and that the normal outbursts display a decay rate that is faster-than-exponential. In addition, we noted that the two superoutbursts are initiated by a normal outburst. Using the standard accretion disk limit cycle model, we were able to reproduce the main features of the outburst light curve of V344 Lyr. We significantly expand on this in @cannizzo11, where we present the 1-year outburst properties of both V344 Lyr and V1504 Cyg. In this work, we report in detail on the results obtained by studying the [*Kepler*]{} Q2–Q4 data, which comprise without question the single best data set obtained to-date from a cataclysmic variable star. The data set reveals signals from the orbital period as well as from positive and negative superhumps.

Review of Superhumps and Examples
=================================

Before digging into the data, we briefly review the physical processes that lead to the photometric modulations termed superhumps.

Positive superhumps and the two-source model
--------------------------------------------

The accretion disk of a typical dwarf nova CV that is in quiescence has a low disk viscosity and so inefficient exchange of angular momentum. As a result, the mass transfer rate $\dot M_{\rm L1}$ through the inner Lagrange point L1 is higher than the mass transfer rate $\dot M_1$ onto the primary. Thus, mass accumulates in the disk until a critical surface density is reached at some annulus, and the fluid in that annulus transitions to a high-viscosity state [@cannizzo98; @cannizzo10]. This high-viscosity state propagates inward and/or outward in radius until the entire disk is in a high-viscosity state characterized by very efficient angular momentum and mass transport – the standard DN outburst [see, e.g., @cannizzo93; @lasota01 for reviews].
In this state, $\dot M_1 > \dot M_{\rm L1}$ and the disk drains mass onto the primary white dwarf. During each DN outburst, however, the angular momentum transport acts to expand the outer disk radius slightly, and after a few to several of these, an otherwise normal DN outburst can expand the outer radius of the disk to the inner Lindblad resonance (near the 3:1 corotation resonance). This can only occur for systems with mass ratios $q=M_2/M_1 \lesssim 0.35$ [@wts09]. Once sufficient mass is present at the resonance radius, the common superhump oscillation mode can be driven to amplitudes that yield photometric oscillations. The superhump oscillation has a period $P_+$ which is a few percent longer than the orbital period, where the [*fractional period excess*]{} $\epsilon_+$ is defined as $$\epsilon_+\equiv {P_+-{P_{\rm orb}}\over{P_{\rm orb}}}. \label{eq: eps+}$$ These are the so-called [*common*]{} or [*positive*]{} superhumps, where the latter term reflects the sign of the period excess $\epsilon_+$. In addition to the SU UMa stars, positive superhumps have also been observed in novalike CVs [@pattersonea93b; @retterea97; @skillmanea97; @patterson05; @kim09], the interacting binary white dwarf AM CVn stars [@pattersonea93a; @warner95amcvn; @nelemans05; @roelofs07; @fontaine11], and in low-mass X-ray binaries [@charlesea91; @mho92; @oc96; @retterea02; @hynesea06]. Figure \[fig: sph+\] shows snapshots from one full orbit of a smoothed particle hydrodynamics (SPH) simulation ($q=0.25$, 100,000 particles) as well as the associated simulation light curve [see @sw98; @wb07; @wts09]. The disk particles are color-coded by the change in internal energy over the previous timestep, and the Roche lobes and positions of $M_1$ are also shown. Panels 1 and 6 of Figure \[fig: sph+\] show the geometry of the disk at superhump maximum.
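For concreteness, evaluating this fractional period excess with the rounded periods quoted in the abstract ($P_+ = 2.20$ hr, $P_{\rm orb} = 2.11$ hr), together with the analogous expression for the negative-superhump period ($P_- = 2.06$ hr), gives excesses of a few percent in magnitude, as expected:

```latex
% Illustrative evaluation using the rounded periods from the abstract.
\epsilon_+ = \frac{P_+ - P_{\rm orb}}{P_{\rm orb}}
           = \frac{2.20\ {\rm hr} - 2.11\ {\rm hr}}{2.11\ {\rm hr}}
           \approx +0.043,
\qquad
\epsilon_- = \frac{P_- - P_{\rm orb}}{P_{\rm orb}}
           = \frac{2.06\ {\rm hr} - 2.11\ {\rm hr}}{2.11\ {\rm hr}}
           \approx -0.024.
```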
Note that here the superhump light source is viscous dissipation resulting from the compression of the disk opposite the secondary star. The local density and shear in this region are both high, leading to enhanced viscous dissipation in the strongly convergent flows. The orbit sampled in the Figure is characteristic of early superhumps where the disk oscillation mode is saturated, and the resulting amplitude is significantly higher ($\sim$0.15 mag) than the models produce once in dynamical equilibrium ($\sim$0.03 mag), owing to the lower mean energy production in the models at superhump onset. As a further detail, we note that whereas the two spiral dissipation waves are stationary in the co-rotating frame before the onset of the superhump oscillation, once the oscillation begins, the spiral arms advance in the prograde direction by $\sim$180$^\circ$ in the co-rotating frame during each superhump cycle. This prograde advancement can be seen by careful inspection of the panels in Figure \[fig: sph+\]. Indeed, this motion of the spiral dissipation waves is central to the superhump oscillation – a spiral arm is “cast” outward as it rotates through the tidal field of the secondary, and then brightens shortly afterward as it compresses back into the disk in a converging flow [@smith07; @wts09]. While viscous dissipation within the periodically-flexing disk provides the dominant source of the superhump modulation, the accretion stream bright spot also provides a periodic photometric signal when sweeping around the rim of a non-axisymmetric disk [@vogt82; @osaki85; @whitehurst88; @kunze04]. The bright spot will be most luminous when it impacts most deeply in the potential well of the primary (e.g., panel 3 of Figure \[fig: sph+\]), and fainter when it impacts the rim further from the white dwarf primary (panels 1 and 6).
This signal is swamped by the superhumps generated by the flexing disk early in the superoutburst, but dominates once the disk is significantly drained of matter and returns to the low state. The disk will continue to oscillate although the driving is much diminished, and thus the stream mechanism will continue to yield a periodic photometric signal of decreasing amplitude until the oscillations cease completely. This photometric signal is what is termed [*late superhumps*]{} in the literature [e.g., @hessman92; @patterson00; @patterson02; @templeton06; @sterken07; @kato09; @kato10]. @rolfe01 presented a detailed study of the deeply eclipsing dwarf nova IY UMa observed during the late superhump phase, where they found exactly this behavior. They used the shadow method [@wood86] to determine the radial location of the bright spot (disk edge) in 22 eclipses observed using time-series photometry. They found that the disk was elliptical and precessing slowly at the beat frequency of the orbital and superhump frequencies, and that the brightness of the stream-disk impact region varied as the square of the relative velocity of the stream and disk material [see also @smak10]. Put another way, the bright spot was brighter when it was located on the periastron quadrant of the elliptical disk, and fainter on the apastron quadrant. Thus, two distinct physical mechanisms give rise to positive superhumps: viscous dissipation in the flexing disk, driven by the resonance with the tidal field of the secondary, and the time-variable viscous dissipation of the bright spot as it sweeps around the rim of a non-axisymmetric disk[^1]. For the remainder of this paper we refer to this as the [*two-source model of positive superhumps*]{} [see also @kunze02; @kunze04].
These two signals are approximately antiphased, and in systems where both operate at roughly equal amplitude, the Fourier transform of the light curve can show a larger amplitude for the second harmonic (first overtone) than for the fundamental (first harmonic). As an example of this double-humped light curve, in Figure \[fig: en400420\] we show 20 orbits of the $q=0.25$ simulation discussed above (Figure \[fig: sph+\]) starting at orbit 400, by which time the system had settled into a state of dynamical equilibrium. The inset in this Figure shows the average superhump pulse shape obtained from orbits 400-500 of the simulation, where we have set phase zero to primary minimum. Note that here the average pulse shape is complex but approximately double-peaked. The Fourier transform displays maximum power at twice the fundamental frequency. When we examine the disk profiles, we find that the dominant peak arises from the disk superhump described above, but the secondary peak roughly half a cycle later results from the impact of the bright spot deeper in the potential well of the primary (see panel 4 of Figure \[fig: sph+\]). The substructure of this secondary maximum results from the interaction of the accretion stream with the spiral arm structures that advance progradely in the co-rotating frame. Panel 3 of Figure \[fig: sph+\] is representative of the disk structure at the time of the small dip in brightness observed at superhump phase 0.55. The dip is explained by the fact that the accretion stream bright spot at this phase is located in the low-density inter-arm region, and therefore the accretion stream can dissipate its energy over a longer distance. In addition, the oscillating disk geometry results in this region having a larger radius, and lower velocity contrast near this phase. @howell96 discuss the observation and phase evolution of the two secondary humps in the SU UMa system TV Corvi.
The 3 AM CVn (helium CV) systems that are in a permanent high state – AM CVn [@skillman99], HP Lib [@patterson02] and the system SDSS J190817.07+394036.4 (KIC 004547333) announced recently by @fontaine11 – all display average pulse shapes that are strongly double-humped. AM CVn itself is frequently observed to show no power in the Fourier transform at the fundamental superhump oscillation frequency [@smak67; @ffw72; @patterson92; @skillman99]. AM CVn systems are known to be helium mass transfer systems with orbital periods ranging between 5 min and $\sim$1 hr [see reviews by @warner95amcvn; @solheim10]. In contrast, the hydrogen-rich old novae and novalike CVs that show permanent superhumps display mean pulse shapes that are nearly always similar to the saturation phase light curves as shown in Figure \[fig: sph+\], and there is no example we know of where a permanent superhump system shows a strong double-humped light curve. The reason for this is clear upon reflection: the AM CVn disks are physically much smaller than the disks in systems with hydrogen-rich secondary stars, resulting in a much higher specific kinetic energy to be dissipated at the bright spot since the disk rim is much deeper in the potential well of the primary. The smaller disk may also yield a smaller amplitude for the disk oscillation signal. In the hydrogen-rich systems in permanent outburst, the disks are large, the mass transfer rates are high, and the disk signal dominates, with a relatively minor contribution from the stream source. We tested the viability of the two-source model through three additional numerical experiments. First, we again restarted the above simulation at orbit 400, but now with the accretion flow through L1 shut off completely. In this run, there is no accretion stream and hence no bright spot contribution. We show the first 20 orbits of the simulation light curve in Figure \[fig: en400420ns\].
With the stream present, the light curve has the double-humped shape of Figure \[fig: en400420\] above, but without the stream the light curve is sharply peaked with no hint of a double hump. Note that because there is no low-specific-angular-momentum material accreting at the edge of the disk, the disk can expand further into the driving zone. This expansion results in the pulse shape growing in amplitude as the mean disk luminosity drops. The pulse shape averaged over orbits 410-440 is shown as an inset in the Figure, and clearly shows that the oscillating disk is the only source of modulation in the light curve – maximum brightness corresponds to a disk geometry like that from panel 1 of Figure \[fig: sph+\] above. The mean brightness is roughly constant for orbits 410-440, and at orbit 440 the mean brightness and pulse amplitude begin to decline as some 50% of the initially-present SPH disk particles are accreted by orbit 450. Our second test was to restart the simulation a third time at orbit 400, but this time to enhance the injection rate of SPH particles (mass flow) at L1 by roughly a factor of 2 over that required to keep the disk particle count constant (Figure \[fig: en400420burst\]). This enhanced mass flux again dramatically changes the character of the light curve. Here the mean pulse shape as shown in the inset is saw-toothed, but with the substructure near the peak from the interaction of the stream with the periodic motion of the spiral features in the disk as viewed in the co-rotating frame. Careful comparison of the times of maximum in these two runs (Figures \[fig: en400420ns\] and \[fig: en400420burst\]) reveals that they are antiphased with each other. For example, the simulation light curve in Figure \[fig: en400420ns\] shows maxima at times of 403.0 and 404.0 orbits, whereas the simulation light curve in Figure \[fig: en400420burst\] shows minima at these same times. Our third experiment was more crude, but still effective. 
We began with a disk from a $q=0.2$ low-viscosity SPH simulation run that was in a stable, non-oscillating state. We offset all of the SPH particles by an amount $0.03a$ along the line of centers \[i.e., $(x,y,z)\rightarrow (x+0.03a,y,z)$\], scaled the SPH particle speeds (but not directions) using the [*vis viva*]{} equation $$v^2 = GM_1\left({{2\over r}-{1\over a}}\right),$$ and restarted the simulation. This technique gives us a disk which is non-axisymmetric but not undergoing the superhump oscillation. The results were as expected: we find maxima in the simulation light curves at the phases where the accretion stream impacts the disk edge deepest in the potential well of the primary. In summary, numerical simulations reproduce the two-source model for positive superhumps.

Negative Superhumps
-------------------

Photometric signals with periods a few percent shorter than ${P_{\rm orb}}$ have also been observed in several DN, novalike, and AM CVn systems – in some cases simultaneously with positive superhumps [see, e.g., Table 2 of @wts09 and Woudt et al. 2009]. These oscillations have been termed [*negative*]{} superhumps owing to the sign of the period “excess” obtained using Equation \[eq: eps+\]. The system TV Col was the first system to show this signal, and @bbmm85 suggested that the periods were consistent with what would be expected for a disk that was tilted out of the orbital plane and freely precessing with a period of $\sim$4 d. @bow88 expanded on this and suggested what is now the accepted model for the origin of negative superhumps: the transit of the accretion stream impact point across the face of a tilted accretion disk that precesses in the retrograde direction [see @wms00; @wb07; @wts09; @foulkes06]. As in the stream source for positive superhumps, the modulation results because the accretion stream impact point has a periodically-varying depth in the potential well of the primary star.
Finding the term “negative period excess” unnecessarily turgid, in this work we refer to the [*period deficit*]{} $\epsilon_-$ defined as $$\epsilon_-\equiv {{P_{\rm orb}}- P_-\over{P_{\rm orb}}}. \label{eq: eps-}$$ Empirically, it is found for systems showing both positive and negative superhumps that $\epsilon_+/\epsilon_-\sim2$ [@patterson99; @retterea02]. We show in Figure \[fig: sph-\] a snapshot from a $q=0.40$ simulation that demonstrates the physical origin of negative superhumps. At orbit 400, the disk particles were tilted $5^\circ$ about the $x$-axis and the simulation restarted. The green line in the Figure running diagonally through the primary indicates the location of the line of nodes; the disk midplane includes this line, but is below the orbital plane to the right of the line, and above the orbital plane to the left of the line. The disk particles are again color-coded by luminosity, and the brightest particles are shown with larger symbols. The ballistic accretion stream can be followed from the L1 point to the impact point near the line of nodes. The simulation light curve is derived from the “surface” particles as described in @wb07. The times of maximum of the negative superhump light curve occur when the accretion stream impact point is deepest in the potential of the primary and on the side of the disk facing the observer. A second observer viewing the disk from the opposite side would still see negative superhumps, but antiphased to those of the first. Having introduced a viable model for positive superhumps and their evolution, let us now compare the model to the [[*Kepler *]{}]{}V344 Lyr photometry.

[[*Kepler *]{}]{}Photometric Observations
=========================================

The primary science mission of the NASA Discovery mission [[*Kepler *]{}]{}is to discover and characterize terrestrial planets in the habitable zone of Sun-like stars using the transit method [@borucki10; @haas10].
The spacecraft is in an Earth-trailing orbit, allowing it to view its roughly 150,000 target stars continuously for the 3.5-yr mission lifetime. The photometer has no shutter and stares continuously at the target field. Each integration lasts 6.54 s. Due to memory and bandwidth constraints, only data from the pre-selected target apertures are kept. [[*Kepler *]{}]{}can observe up to 170,000 targets using the long-cadence (LC) mode, summing 270 integrations over 29.4 min, and up to 512 targets in the short-cadence (SC) mode, summing 9 integrations for an effective exposure time of 58.8 s. There are gaps in the [[*Kepler *]{}]{}data streams resulting from, for example, monthly data downloads using the high-gain antenna and quarterly 90$^\circ$ spacecraft rolls, as well as unplanned safe-mode and loss of fine point events. For further details of the spacecraft commissioning, target tables, data collection and processing, and performance metrics, see @haas10, @koch10, and @caldwell10. [[*Kepler *]{}]{}data are provided as quarterly FITS files by the Science Operations Center after being processed through the standard data reduction pipeline [@jenkins10]. The raw data are first corrected for bias, smear induced by the shutterless readout, and sky background. Time series are extracted using simple aperture photometry (SAP) using an optimal aperture for each star, and these “SAP light curves” are what we use in this study. The dates and times for the beginning and end of Q2, Q3 and Q4 are listed in Table \[tbl: quarters\].

  Quarter   Start (MJD)   Start (UT)          End (MJD)   End (UT)
  --------- ------------- ------------------- ----------- -------------------
  Q2        55002.008     2009 Jun 20 00:11   55090.975   2009 Sep 17 11:26
  Q3        55092.712     2009 Sep 18 17:05   55182.007   2009 Dec 17 00:09
  Q4        55184.868     2009 Dec 19 20:49   55274.714   2010 Mar 19 17:07

\[tbl: quarters\]

The full SAP light curve for [[*Kepler *]{}]{}quarters Q2, Q3, and Q4 is shown in flux units in Figure \[fig: lcrawflux3\]. In Figure 2 of Paper II we show the full SAP light curve in Kp magnitude units.
As noted in Paper II and evident in Figure \[fig: lcrawflux3\], the superoutbursts begin as normal DN outbursts. The Q2 data begin at BJD 2455002.5098. For simplicity we will below refer to events as occurring on, for example, day 70, which should be interpreted to mean BJD 2455070 – that is, we take BJD 2455000 to be our fiducial time reference. In this paper, we focus on the superhump and orbital signals present in the data. An analysis of the outburst behavior of these data in the context of constraining the thermal-viscous limit cycle is published separately (Paper II). To remove the large-amplitude outburst behavior from the raw light curve – i.e., to high-pass filter the data – we subtracted a boxcar-smoothed copy of the light curve from the SAP light curve. The window width was taken to be the superhump cycle length (2.2 hr or 135 points). To minimize the effects of data gaps, we split the data into a separate file any time we had a data gap of more than 1 cycle. This resulted in 10 data chunks. Once the residual light curve was calculated, we recombined the data into a single file. The results for Q2, Q3, and Q4 are shown in Figures \[fig: reslc1\], \[fig: reslc2\], and \[fig: reslc3\], respectively. We also calculated the fractional amplitude light curve by dividing the raw light curve by the smoothed light curve, and subtracting 1.0. However, as expected, the amplitudes of the photometric signals in the residual light curve are more nearly constant than those in the fractional amplitude light curve. This is because the superhump signals – both positive and negative – have amplitudes determined by physical processes within the disk that are not strong functions of the overall disk luminosity.

The Fourier Transform
=====================

In Figure \[fig: 2dDFT\] we show the discrete Fourier transform amplitude spectra for the current data set. We took the transforms over 2000 frequency points spanning 0 to 70 cycles per day.
Each transform is of a 5-day window of the data, and the window was moved roughly 1/2 day between subsequent transforms. The color scale indicates the logarithm of the residual count light curve amplitude in units of counts per cadence. In Figure \[fig: 2dDFTzoom\] we show a magnified view including only frequencies 9.5 to 12.5 c/d to better bring out the 3 fundamental frequencies in the system. Figures \[fig: 2dDFT\] and \[fig: 2dDFTzoom\] are rich with information. The positive superhumps ($P_+ = 2.20$ hr) dominate the power for days $\sim$58–80 and $\sim$162–190. In Figure \[fig: 2dDFTzoom\] we see that the time evolution of the fundamental oscillation frequency is remarkably similar in both superoutbursts. The dynamics behind this are discussed below in §5.2 where the O-C diagrams are presented. Once the majority of the mass that will accrete during the event has done so, the disk transitions back to the low state. This occurs roughly 15 d after superhump onset for V344 Lyr. During this transition, the disk source of the superhump modulation fades with the disk itself, and the stream source of the superhump modulation begins to dominate. A careful inspection of Figure \[fig: 2dDFT\] shows that at this time of transition between disk and stream superhumps, there is power in the second harmonic (first overtone) comparable to that found in the fundamental. The behavior of the light curve and Fourier transform are more clearly displayed in Figure \[fig: trans\], which shows 2 days of the light curve during the transition period, and the associated Fourier transforms. In both cases, the “knee” in the superoutburst light curve (see Figure \[fig: lcrawflux3\]) occurs just past the midpoint of the data sets. Although the second harmonic is strong in the transition phase, the pulse shape of the disk superhump signal is sharply peaked so the fundamental remains prominent in the Fourier transform (see Figure \[fig: trans\]).
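The residual-light-curve and sliding-transform steps described above can be sketched together as follows. This is a minimal NumPy illustration, not the actual analysis code: the 135-point detrending window and the 5-day/0.5-day transform scheme follow the text, while the function names and frequency-grid details are our assumptions, and a gap-free segment is assumed (the text splits the data at gaps longer than one cycle).

```python
import numpy as np

def highpass_residual(flux, window=135):
    """Subtract a boxcar-smoothed copy of the light curve from itself.
    A window of 135 short-cadence points is one 2.2-hr superhump
    cycle, so the smoothing removes the slow outburst envelope while
    the subtraction keeps the superhump-frequency modulation."""
    kernel = np.ones(window) / window
    return flux - np.convolve(flux, kernel, mode="same")

def sliding_amplitude_spectra(t, y, window=5.0, step=0.5,
                              freqs=np.linspace(0.1, 70.0, 2000)):
    """Amplitude spectra (in the units of y) in a sliding window of
    `window` days stepped by `step` days, evaluated by direct DFT on
    an explicit grid of frequencies in cycles/day (appropriate for
    unevenly sampled data).  Stacking the rows gives a 2-D DFT image."""
    starts = np.arange(t.min(), t.max() - window + 1e-9, step)
    spectra = []
    for t0 in starts:
        m = (t >= t0) & (t < t0 + window)
        ts, ys = t[m], y[m] - y[m].mean()
        phases = np.exp(-2j * np.pi * np.outer(freqs, ts))
        spectra.append(2.0 * np.abs(phases @ ys) / len(ts))
    return starts, freqs, np.array(spectra)
```

With this normalization, a pure sinusoid of amplitude $A$ in a window returns a peak of height $\approx A$ at its frequency, so the stacked rows can be read directly as amplitude rather than power.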
As can clearly be seen in Figure \[fig: 2dDFTzoom\], the orbital period of $2.10$ hr (11.4 c/d) only becomes readily apparent in the Q4 data, starting at about day 200, and it dominates the Q4 Fourier transforms. Once identified in Q4, the orbital frequency appears to show some power in the week before the first superoutburst in Q2, and between days $\sim$130 and the second superoutburst in Q3. Note, however, that the amplitude of the orbital signal is roughly 1 order of magnitude smaller than the amplitude of the negative superhump signal, and as much as 2 orders of magnitude smaller than the amplitude of the positive superhump signal. In these data, the orbital signal is found only when the positive or negative superhump signals are weak or absent. We discuss the physical reason for this below. Finally, we note that we searched the Fourier transform of our [[*Kepler *]{}]{}short-cadence (SC) data out to the Nyquist frequency of 8.496 mHz for any significant high-frequency power which might, for example, indicate accretion onto a spinning magnetic primary star (i.e., intermediate polar or DQ Her behavior). We found no reliable detection of higher frequencies in the data, beyond the well-known spurious frequencies present in [[*Kepler *]{}]{}time series data at multiples of the LC frequency [$n\times0.566427$ mHz $= n\times48.9393\rm\ c\ d^{-1}$; @gilliland10]. For a full list of possible spurious frequencies in the SC data, see the [*Kepler Data Characteristics Handbook*]{}.

The Orbital Period
------------------

The orbital period is the most fundamental clock in a binary system. In the original Q2 data presented by @still10, the only periods that were clearly present in the data were the 2.20-hr (10.9 c/d) superhump period and the period observed at 2.06 hr (11.7 c/d). In Paper I we identified this latter signal as the orbital period but discussed the possibility that it is a negative superhump period.
The Q3 data revealed a marginal detection of a period of 2.11 hr (11.4 c/d), and this period is found to dominate the Q4 data (see Figure \[fig: q4dft\]). The pulse shape for this signal averaged over days 200-275 is shown in Figure \[fig: avelcporb\]. We can now safely identify this 2.11 hr (11.4 c/d) signal as the system orbital period, which then indicates that the 2.06 hr (11.7 c/d) signal is a negative superhump. The orbital period was determined using the method of non-linear least squares, fitting a function of the form $$y(t) = A \sin[2\pi(t-T_0)/P].$$ The results of the fit are $$\begin{aligned} P &=& 0.087904\pm3\times10^{-6}\rm\ d,\\ &=& 2.109696\pm7\times10^{-5}\rm\ hr,\\ T_0 &=& {\rm BJD}\ 2455200.2080\pm0.0006,\\ A &=& 7.8\pm 0.1\rm\ e^-\ s^{-1}.\end{aligned}$$ Note that the amplitude is only roughly 25 mmag – an order of magnitude or more smaller than the peak amplitudes of the positive and negative superhumps in the system. That an orbital signal exists indicates that the system is not face-on. The source of the orbital signal of a non-superhumping CV can be either the variable flux along the line of sight from a bright spot that is periodically shadowed as it sweeps around the back rim of the disk, or the so-called reflection effect as the face of the secondary star that is illuminated by the UV radiation of the disk rotates into and out of view [e.g., @warner95]. In Figure \[fig: 2dDFTzoom\], we find that the orbital signal is never observed when the positive superhumps are present, but this is not a strong constraint as the positive superhump amplitude swamps that of the orbital signal. More revealing is the interplay between the orbital signal, the negative superhump signal, and the DN outbursts. In Q2 and Q3, the orbital signal appears only when the negative superhump signal is weak or absent.
This is consistent with the idea that the addition of material from the accretion stream should bring the disk back to the orbital plane roughly on the mass-replacement time scale [@wb07; @wts09]. The strong negative superhump signal early in Q2 indicates a tilt of $\sim$5$^\circ$, sufficient for the accretion stream to avoid interaction with the disk rim for all phases except those in which the disk rim is along the line of nodes. As the disk tilt declines, however, an increasing fraction of the stream material will impact the disk rim and not the inner disk – in other words, the orbital signal will grow at the expense of the negative superhump signal. This appears to be consistent with the data in hand and if so would suggest that the orbital signal results from the bright spot in V344 Lyr, but the result is only speculative at present. In Figure \[fig: omc200275\] we show the O-C phase diagram for ${P_{\rm orb}}$. We fit 20 cycles for each point in the Figure, and moved the window 10 cycles between fits. The small apparent wanderings in phase result from interference from the other periods present, and also appear to correlate with the outbursts. We show the 2D DFT for days 200 to 275 in Figure \[fig: 2dDFTq4\]. Here we used a window width of 2 days, and shifted the window by 1/8th of a day between transforms. We show amplitude per cadence. The orbital signal appears to be increasing in amplitude slightly during Q4, perhaps as a result of the buildup of mass in the outer disk after several DN outbursts. The large amplitudes found for the orbital signal in Figure \[fig: omc200275\] during outbursts 17 and 19 (starting days $\sim$246.5 and 266, respectively) are spurious, resulting from the higher-frequency signals found on the decline from maximum in each case. 
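Both the sinusoid fit quoted above and the windowed O-C construction reduce, at fixed trial period, to a linear fit for sine and cosine amplitudes. The sketch below illustrates the idea with a period grid scan standing in for the nonlinear solver; the 20-cycle window and 10-cycle step follow the text, but the function names and grid choices are ours, not the pipeline's.

```python
import numpy as np

def _sin_cos_fit(t, y, P):
    """Linear least-squares fit of a*sin(wt) + b*cos(wt) at fixed period P."""
    w = 2.0 * np.pi / P
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b, np.sum((y - X @ np.array([a, b])) ** 2)

def fit_sine_period(t, y, p_grid):
    """Fit y(t) = A sin[2*pi*(t - T0)/P] by scanning trial periods and
    keeping the one that minimizes chi^2 (equivalent here to the
    nonlinear least-squares fit described in the text)."""
    best = (np.inf, None, 0.0, 0.0)
    for P in p_grid:
        a, b, chi2 = _sin_cos_fit(t, y, P)
        if chi2 < best[0]:
            best = (chi2, P, a, b)
    _, P, a, b = best
    A = np.hypot(a, b)
    T0 = -np.arctan2(b, a) * P / (2.0 * np.pi)  # epoch, modulo P
    return A, T0, P

def o_minus_c(t, y, P, n_fit=20, n_step=10):
    """O-C diagram: fit a fixed-period sinusoid to successive windows
    of n_fit cycles, stepped by n_step cycles; each fit's phase, in
    cycles, is the O-C value at the window mid-time (cf. the 20-cycle
    fits stepped by 10 cycles used for the orbital O-C diagram)."""
    mids, omc, amps = [], [], []
    start = t.min()
    while start + n_fit * P <= t.max():
        m = (t >= start) & (t < start + n_fit * P)
        a, b, _ = _sin_cos_fit(t[m], y[m], P)
        mids.append(start + 0.5 * n_fit * P)
        omc.append(np.arctan2(b, a) / (2.0 * np.pi))  # cycles
        amps.append(np.hypot(a, b))
        start += n_step * P
    return np.array(mids), np.array(omc), np.array(amps)
```

A signal at exactly the reference period yields a flat O-C; a signal at a slightly different period yields a linearly drifting O-C, which is what makes these diagrams sensitive to small period changes.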
As discussed below, outbursts 17 and 19 both show evidence for triggering a negative superhump signal, and the light curve for outburst 19 yields a complex Fourier transform that shows power at the orbital frequency, the negative superhump frequency, and at 12.3 c/d (1.95 hr).

Observed Positive Superhumps
----------------------------

The light curve for V344 Lyr is rich in detail, and in particular provides the best data yet for exploring the time evolution of positive superhumps. As discussed above, the superhumps are first driven to resonance during the DN outburst that precedes the superoutburst, as the heating wave transitions the outer disk to the high-viscosity state, allowing the resonance to be driven to amplitudes that can modulate the system luminosity. Close inspection of the positive superhumps in Figures \[fig: reslc1\] and \[fig: reslc2\] shows that in both cases the amplitude of the superhump is initially quite small, but grows to saturation ($A\sim0.25$ mag) in roughly 16 cycles. There is a signal evident preceding the second superoutburst (days $\sim$156.5 to 161) – this is a blend of the orbital signal and a very weak negative superhump signal. The mean superhump period, obtained by averaging the results from non-linear least squares fits to the disk superhump signal during the growth-through-plateau phases of the two superoutbursts, is $P_+ = 0.091769(3)\rm\ d = 2.20245(8)\rm\ hr$. The errors quoted for the last significant digit are the [*formal*]{} errors from the fits summed in quadrature. The periods drift significantly during a superoutburst, however, indicating these formal error estimates should not be taken seriously. Using the periods found for the superhumps and orbit, we find a period excess of $\epsilon_+ = 4.4\%$. We plot the result for V344 Lyr with the results from the well-determined systems below the period gap listed in Table 9 of @patterson05 in Figure \[fig: epsvporb\]. The period excess for V344 Lyr is consistent with the existing data.
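For concreteness, the quoted period excess follows directly from Equation \[eq: eps+\] and the two measured periods:

```python
# Fractional period excess from the measured mean positive-superhump
# period and the orbital period (values quoted in the text, in days).
P_plus = 0.091769
P_orb = 0.087904
eps_plus = (P_plus - P_orb) / P_orb
print(f"eps_+ = {eps_plus:.1%}")  # eps_+ = 4.4%
```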
In Figures \[fig: sh1panave\] and \[fig: sh2panave\] we show the time evolution of the mean pulse shape for the first and second superoutbursts. To create these Figures, we split the data into 5-day subsets ($\sim$50 cycles), with an overlap of roughly 2.5 days from one subset to the next. For each subset we computed a discrete Fourier transform and then folded the data on the period with the most power. The evolution of the mean pulse shape is similar to results published previously [e.g., @patterson03; @kato09; @kato10]; however, the quality of the [[*Kepler *]{}]{} data is such that we can rigorously test the model that has been slowly emerging in the past few years for the origin of the superhump light source, the evolution of the pulse shape, and the physical origin of late superhumps. A comparison of the simulation light curve from Figure \[fig: sph+\] with the early mean pulse shapes shown in Figures \[fig: sh1panave\] and \[fig: sh2panave\] reveals a remarkable similarity, all the more remarkable given the very approximate nature of the artificial viscosity prescription used in the SPH calculations and the crude way in which the simulation light curves are calculated. If the comparison between data and model is correct, the SPH simulations illuminate the evolution of the positive superhumps from the early disk-dominated source to the late stream-dominated source. The signal observed early in the superoutburst is dominated by disk superhumps, where the disk at resonance is driven into a large-amplitude oscillation, and viscous dissipation in the strongly convergent flows that occur once per superhump cycle yields the characteristic large-amplitude superhumps seen in the top panels of Figures \[fig: sh1panave\] and \[fig: sh2panave\]. After $\sim$100 cycles ($\sim$10 d), a significant amount of mass has drained from the disk, and in particular from the driving region.
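The fold-and-average construction of these mean pulse shapes can be sketched as below; the bin count is our choice, and phase zero here is simply $t=0$ rather than primary minimum.

```python
import numpy as np

def folded_pulse_shape(t, y, period, nbins=50):
    """Fold a light-curve subset on a trial period (e.g., the period
    with the most DFT power), then average within phase bins to get
    the mean pulse shape."""
    phase = (t / period) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    shape = np.array([y[idx == i].mean() for i in range(nbins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, shape
```

With $\sim$50 cycles per 5-day subset, every phase bin is well populated, so binning noise is small compared with the $\sim$0.25 mag superhump amplitude.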
The disk continues to oscillate in response to the driving even after it has transitioned back to the quiescent state, but the driving is off-resonance and the periodic viscous dissipation described above is much reduced. Thus, we agree with previous authors that the late/quiescent superhumps that have been observed result from the dissipation in the bright spot as it sweeps around the rim of the non-axisymmetric disk. To compute O-C phase diagrams for each superoutburst, we fit a 3-cycle sine curve with the mean period of 2.196 hr which yields a relatively constant O-C during the plateau phase. The results are shown in Figures \[fig: sh1omc\] and \[fig: sh2omc\]. The top panel shows the residual light curve as well as the SAP light curve smoothed with a window width of $P_+$ (135 points). The second panel shows the O-C phase diagram, and the third panel the amplitude of the fit. Also included in this Figure in the fourth panel are the periods of the positive superhumps during 2-day subsets of the residual light curve obtained with Fourier transforms. The horizontal bars show the extent of each data window. By differencing adjacent periods, we calculate the localized rate of period change of the superhumps $\dot P_+$. These results are shown in the bottom panel. As perhaps might be expected from the similarity in the evolution of the mean pulse profile during the two superoutbursts, the O-C phase diagrams as well as the evolution of the periods and localized rates of period change are also similar. Such diagrams can be illuminating in the study of superhumps, and @kato09 and @kato10 present a comprehensive population analysis of superhumps using this method. When the disk is first driven to oscillation in the growth and saturation phase, there is maximum mass at large radius, and the corresponding superhump period ($\sim$2.25 hr) is significantly longer than the mean, yielding a positive slope in the O-C diagram. 
The rate of period change estimated from the first 4 days of data for both superoutbursts is $\dot P_+ = -8\times 10^{-4}\ \rm s\ s^{-1}$. Roughly 10 cycles ($\sim1$ d for V344 Lyr) after the mode saturates with maximum amplitude, sufficient mass has drained from the outer disk that the superhump period has decreased to the mean, and the superhump period continues to decrease out to $E\sim100$ as the precession rate slows as a result of the decreasing mean radius of the flexing, non-axisymmetric disk. The period at this time is roughly 2.19 hr for both superoutbursts, and the rate of period change between cycles 30 and 70 which includes the early plateau phase before the stream signal becomes important is $\dot P_+ = -1.8\times 10^{-4}\ \rm s\ s^{-1}$. Between cycles $\sim$110 and 150, the O-C phase diagrams in Figures \[fig: sh1omc\] and \[fig: sh2omc\] show phase shifts of $\sim$0.5 cycles. This is the result of the continued fading of the disk superhump, and the transition to the stream/late superhump signal. Careful inspection of the top panels of Figures \[fig: sh1omc\] and \[fig: sh2omc\] near days 68 and 174 in fact shows the decreasing amplitude of the disk superhump, and the relatively constant amplitude of the stream superhump. By cycle $\sim150$ (days $\sim$72 and 176), the disk superhump amplitude is negligible, and all that remains is the signal from the stream superhump. The smoothed SAP light curve shown in the top panel shows that these times correspond to the return to the quiescent state during which the global viscosity is again low. It is also interesting that $\dot P_+$ itself appears to be increasing relatively linearly during much of the plateau phase with an average rate of $\ddot P \sim 10^{-9}\rm\ s^{-1}$. At present this is not explained by the numerical simulations. It may simply be that this result reflects the growing relative importance of the stream superhump signal on the phase of the 3-cycle sine fit.
This is almost certainly the case during the period peaks found at days $\sim$71 and 175, where we find that the sine fits are pulled to longer period by the complex and rapidly changing waveform (e.g., Figure \[fig: trans\]). In the quiescent interval before the first subsequent outburst, the O-C diagram shows a concave-downward shape indicating a negative ${\dot P_+}\sim -2\times10^{-4}\ \rm s\ s^{-1}$. We speculate that the behavior of the O-C curve in response to the outburst following the first superoutburst may indicate that the outburst effectively expands the radius of the disk, causing a faster apsidal precession. Unfortunately, there is a gap in the [[*Kepler *]{}]{}data that starts just after the initial rise of the outburst following the second superoutburst. The value of ${\dot P_+}$ averaged over the last 2 measured bins for both superoutbursts is ${\dot P_+}\sim -3\times10^{-4}\ \rm s\ s^{-1}$. The measured values of ${\dot P_+}$ for V344 Lyr are consistent with those reported in the extensive compilation of @kato09. To make a direct comparison with Kato et al., who calculate ${\dot P_+}$ over the first 200 cycles (i.e., plateau phase), we average all the ${\dot P_+}$ measurements out to the drop to quiescence, and find an average value of $-6\times10^{-5} \ \rm s\ s^{-1}$ for the first superoutburst and $-9\times10^{-5} \ \rm s\ s^{-1}$ for the second. These values for V344 Lyr are entirely consistent with the Kato et al. results as shown in their Figure 8. In @still10 we noted that V344 Lyr was unusual (but not unique) in that superhumps persist into quiescence and through the following outburst in Q2. Other systems that have been observed to show (late) superhumps into quiescence more typically have short orbital periods, including V1159 Ori [@patterson95], ER UMa [@gao99; @zhao06], WZ Sge [@patterson02wzsge], and the WZ Sge-like star V466 And [@chochol10], among others.
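The half-cycle phase shift in the sine fits described above can be reproduced with a toy model in which a fading "disk" sinusoid hands over to a weaker, steady "stream" sinusoid; the antiphase offset, relative amplitude, and decay constant below are illustrative assumptions, not fitted values:

```python
import numpy as np

P = 2.196 / 24.0                                 # superhump period in days
t = np.arange(0.0, 10 * P, P / 50)
disk = np.exp(-t / (3 * P)) * np.sin(2 * np.pi * t / P)  # fading disk superhump
stream = 0.2 * np.sin(2 * np.pi * t / P + np.pi)         # steady stream signal (assumed antiphase)
flux = disk + stream

def phase(t, y, p):
    """Phase (in cycles) of the best-fit sinusoid of period p."""
    return np.angle(np.sum(y * np.exp(-2j * np.pi * t / p))) / (2 * np.pi)

early = phase(t[t < 2 * P], flux[t < 2 * P], P)
late = phase(t[t > 8 * P], flux[t > 8 * P], P)
print((late - early) % 1.0)    # ~0.5: the jump seen in the O-C diagrams
```

Once the fading component drops below the steady one, the fitted phase necessarily jumps by half a cycle, regardless of the exact decay law.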
The identification of late superhumps is a matter of contention in some cases [@kato09], and the post-superoutburst coverage of targets is more sparse than the coverage during superoutbursts. Thus it is difficult to know if post-superoutburst superhumps are common or rare at this time. Observed Negative Superhumps ---------------------------- As noted above in §2.2, the 2.06-hr (11.4 c/d) signal that dominates the light curve for the first $\sim$35 days of Q2 is now understood to be the result of a negative superhump. This yields a value for the period deficit (Equation \[eq: eps-\]) of $\epsilon_- = 2.5$%. The maximum amplitude at quiescence is $A\sim0.8$ mag. Figure \[fig: aveneglc\] shows 10 cycles of the negative superhump signal during this time. The inset shows the mean pulse shape averaged over days 5 to 25 (roughly 230 cycles). The signal is approximately sawtoothed with a rise time roughly twice the fall time. It appears consistent with the pulse shapes @wb07 obtained using ray-trace techniques on 3D simulations of tilted disks (their Figure 3). Negative superhumps dominate the power in days $\sim$2–35 and again in days $\sim$100–160. The signal observed near the beginning of Q2 reveals a remarkably large rate of period change – large enough that it can be seen in the harmonics of the Fourier transform shown in Figure \[fig: 2dDFT\] as a negative slope towards lower frequency with time. A nonlinear least squares fit to the fundamental period measured during days 2.5-7.5 yields $P_-=2.05006\pm0.00005$ hr. A fit to the data from days 22–26, however, yields $P_-=2.06273\pm0.00005$ hr. The formal errors from non-linear least squares fits underestimate the true errors by as much as an order of magnitude [@mo99], but even if this is the case, these two results differ by $\sim$25$\sigma$. Taken at face value, they yield a rate of period change of $\dot P_- \sim 3\times10^{-5}\rm\ s\ s^{-1}$.
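The quoted rate follows from simply differencing the two fitted periods over the elapsed time between the window midpoints (roughly days 5 and 24; our reading of the quoted spans):

```python
P1 = 2.05006 * 3600.0      # fitted period from days 2.5-7.5, in seconds
P2 = 2.06273 * 3600.0      # fitted period from days 22-26, in seconds
dt = (24.0 - 5.0) * 86400.0  # elapsed time between window midpoints, seconds
pdot = (P2 - P1) / dt
print(f"{pdot:.1e} s/s")   # ~3e-5, as quoted in the text
```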
Similarly, we fit the negative superhump periods in two 4-day windows centered on days 112.0 and 121.0. The periods obtained from non-linear least squares are $P_- = 2.0530 \pm 0.0002$ hr and $P_- = 2.066038 \pm 0.00008$ hr, respectively, which yields $\dot P_- \sim 6\times10^{-5}\rm\ s\ s^{-1}$ over this time span. In their recent comprehensive analysis of the evolution of CVs as revealed by their donor stars, @knigge11 estimate that for systems with ${P_{\rm orb}}\sim2$ hr the rate of orbital period change should be $\dot {P_{\rm orb}}\sim-7\times10^{-14}\rm\ s\ s^{-1}$ (see their Figure 11). Clearly the $\sim$2.06-hr signal cannot be orbital in origin. In some negatively superhumping systems with high inclinations, the precessing tilted disk can modulate the mean brightness [e.g. @stanishev02]. We found no significant signal in the Fourier transform at the precession period of $\sim$3.6 d. In Figure \[fig: negshomc\] we show the results of the O-C analysis for the Q2 data. To create the Figure, we fit 5-cycle sine curves of period 2.05 hr to the residual light curve, shifting the data by one cycle between fits. The shape of the O-C diagram is concave up until the peak of the first outburst at day $\sim$28, indicating that the period of the signal is lengthening during this time span. The magnitude of the negative superhump period deficit is inversely related to the retrograde precession period of the tilted disk – a shorter precession period yields a larger period deficit. A disk that was not precessing at all would show a negative superhump period equal to the orbital period. The observation that the negative superhump period in V344 Lyr is lengthening during days $\sim$2 to 27 indicates that the precession period of the tilted disk is increasing (i.e., the rate of precession is decreasing). Coincident with the first DN outburst (outburst 1) in Q2, there is a cusp in the O-C diagram, indicating a jump to shorter period (faster retrograde precession rate).
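For a retrograde precessing tilted disk the negative superhump frequency is the sum of the orbital and precession frequencies, so the $\sim$3.6 d precession period follows directly from the rounded periods quoted earlier:

```python
P_orb, P_minus = 2.11, 2.06                # hours (rounded values from the text)
nu_prec = 1.0 / P_minus - 1.0 / P_orb      # beat relation: nu_- = nu_orb + nu_prec
P_prec_days = 1.0 / nu_prec / 24.0
print(f"{P_prec_days:.1f} d")              # ~3.6 d
```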
The amplitude of the signal begins to decline significantly following outburst 1, and the signal is effectively quenched by outburst 2. Note that between days $\sim$28 and 35 the O-C diagram is again concave up, although with less curvature than before outburst 1. We show the 2D DFT of the pre-superoutburst Q2 data in Figure \[fig: 2dDFTq2\]. Here we used a window width of 2 days that was shifted 1/8 day between transforms. We plot the amplitude in counts per cadence. It is evident that outburst 1 shifts the oscillation frequency, as well as quenching the amplitude of the signal. Outburst 2 triggers a short-lived signal at roughly 11.9 c/d (2.02 hr), and outburst 3 appears to generate signals near the frequencies of the negative and positive superhumps that rapidly evolve to higher and lower frequencies, respectively, only to fade into the noise background by the end of the outburst. Outburst 3 has a somewhat slower rise to maximum than most of the outbursts in the time series and is the last outburst before the first superoutburst, but is otherwise unremarkable. This is the only time we see this behavior in the 3 quarters of data we present, so it is unclear what the underlying physical mechanism is. Although much of the Q3 light curve is dominated by the negative superhump signal, the amplitude is much lower than early in Q2, and in addition there is contamination from the orbital and positive superhump signals. In Figure \[fig: 2dDFTq3\] we show the 2D DFT for the Q3 data between days 93 and 162, again showing the amplitude in counts per cadence versus time and frequency. We used a window width of 2 days that was shifted 1/8 day between transforms. In Figure \[fig: negshomc2\] we show the O-C phase diagram obtained by fitting a 5-cycle sine curve of period 2.06 hr to data spanning days 93.2 to 140.0. The amplitude during this time is considerably smaller than was the case for the Q2 negative superhumps.
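The 2D DFTs described here amount to best-fit sinusoid amplitudes evaluated on a frequency grid in a sliding window (2 d wide, stepped by 1/8 d). A schematic version, with the cadence and frequency grid chosen purely for illustration:

```python
import numpy as np

def sliding_amplitudes(t, flux, freqs, width=2.0, step=0.125):
    """Best-fit sinusoid amplitude on a frequency grid in sliding windows.
    Rows correspond to window midtimes, columns to frequencies (t in days)."""
    mids, rows = [], []
    t0 = t[0]
    while t0 + width <= t[-1]:
        m = (t >= t0) & (t < t0 + width)
        z = np.exp(-2j * np.pi * np.outer(freqs, t[m]))
        rows.append(2.0 * np.abs(z @ flux[m]) / m.sum())
        mids.append(t0 + width / 2.0)
        t0 += step
    return np.array(mids), np.array(rows)

# Schematic check: a pure 11.4 c/d sinusoid sampled at 30-minute cadence.
t = np.arange(0.0, 6.0, 1.0 / 48.0)
flux = np.sin(2 * np.pi * 11.4 * t)
mids, amps = sliding_amplitudes(t, flux, np.linspace(10.0, 13.0, 61))
```

Stacking the rows against the window midtimes gives the time-frequency amplitude maps shown in the 2D DFT figures.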
Before day 106, there appears to be contamination from periodicities near the superhump frequency of 10.9 c/d that are evident in Figure \[fig: 2dDFTq3\], and after day 126 the signal fades dramatically. It was only during days 106.5 to 123.2 that the amplitude of the negative superhump signal was large enough, stable enough, and uncontaminated to yield a clean O-C phase diagram. These data lie between outbursts 8 and 9, and comprise the longest quiescent stretch in Q3. It can be seen that the O-C curve is again concave upward indicating a positive rate of period change as calculated above, and the bottom panel indicates that the amplitude of the signal is increasing during this time span. The retrograde precession rate of a tilted accretion disk is a direct function of the effective (mass weighted) radius of the disk. Several groups have studied the precession properties of tilted disks [@papterq95; @larwood96; @larwood98; @lp97; @lai99]. @papaloizou97 derived the following expression for the induced precession frequency $\omega_p$ of a tilted accretion disk, $$\omega_p = -{3\over 4}{GM_2\over a^3} {\int\Sigma r^3\, dr\over \int \Sigma\Omega r^3\, dr}\,\cos\delta \label{eq: pt95}$$ where $\omega_p$ is the leading-order term of the induced precession frequency for a differentially rotating fluid disk, calculated using linear perturbation theory, $\Sigma(r)$ is the axisymmetric surface density profile and $\Omega(r)$ the unperturbed Keplerian angular velocity profile, $a$ is the orbital separation, $M_2$ is the mass of the secondary, and $\delta$ is the tilt of the disk with respect to the orbital plane. The integrals are to be taken between the inner and outer radii of the disk. 
In a later study of the precession of tilted accretion disks, @larwood97 [and see Larwood (1998)] derived the expression for the precession frequency of a disk with constant surface density $\Sigma$ and polytropic equation of state with ratio of specific heats equal to 5/3: $${\omega_p\over\Omega_0} = -{3\over 7}q\left({R_0\over a}\right)^3\cos\delta, \label{eq: larwood}$$ where here $\Omega_0$ is the Keplerian angular velocity of the outer disk of radius $R_0$, and $q$ is the mass ratio. The physical interpretation of Equations \[eq: pt95\] and \[eq: larwood\] is that tilted accretion disks weighted to larger radii will have higher precession frequencies than those weighted to smaller radii. For example, if we have 2 disks with the same nominal tilt and total mass, where one has a constant surface density and the other with a surface density that increases with radius, the second disk will have a higher precession rate, and would yield a negative superhump frequency higher than the first. A third disk with most of its mass concentrated at small radius would have a lower precession frequency and yield a negative superhump signal nearest the orbital signal. In this picture the increasing precession period indicated by the positive rate of period change for the negative superhump signal $\dot P_-$ might at first seem counter-intuitive since the disk is gaining mass at quiescence. However, the key fact is that tilted disks accrete most of their mass at [*small*]{} radii, since the accretion stream impacts the face of the tilted disk along the line of nodes [@wb07; @wts09]. The accretion stream impacts the rim of the disk only twice per orbit (refer back to Figure \[fig: sph-\]). Thus, the effective (mass weighted) radius of an accreting tilted disk [*decreases*]{} with time, causing a slowing in the retrograde precession rate $\omega_p$, and an increase in the period of the negative superhump signal $P_-$. 
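Equations \[eq: pt95\] and \[eq: larwood\] can be checked against each other numerically. In units $G = M_1 = R_0 = 1$ (so $\Omega_0 = 1$), Equation \[eq: pt95\] reduces to $\omega_p/\Omega_0 = -C\,q\,(R_0/a)^3\cos\delta$ with the coefficient $C$ set by the surface-density profile. The sketch below (our construction, not from the paper) recovers $C = 15/32 \approx 0.47$ for constant $\Sigma$, close to Larwood's pressure-corrected $3/7 \approx 0.43$, and reproduces the ordering with mass distribution described in the text:

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal integration (avoids NumPy-version trapz issues)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def precession_coef(sigma, r):
    """Coefficient C in omega_p/Omega_0 = -C q (R0/a)**3 cos(delta),
    from Eq. pt95 in units G = M1 = R0 = 1 (Keplerian Omega = r**-1.5)."""
    omega = r**-1.5
    return 0.75 * trap(sigma * r**3, r) / trap(sigma * omega * r**3, r)

r = np.linspace(1e-6, 1.0, 200_001)
c_out = precession_coef(r, r)                 # Sigma ~ r: mass at large radii
c_flat = precession_coef(np.ones_like(r), r)  # constant Sigma
c_in = precession_coef(1.0 / r, r)            # Sigma ~ 1/r: mass at small radii
print(c_out, c_flat, c_in)   # ~0.525, ~0.469 (= 15/32), ~0.375
```

A disk weighted to large radii precesses fastest, so draining the outer disk while feeding the inner disk slows the retrograde precession, as the next paragraph describes.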
A detailed analysis of the data, theory, and numerical model results should allow us to probe the time evolution of the mass distribution in disks undergoing negative superhumps, and hence the low-state viscosity mechanism. The unprecedented quality and quantity of the [[*Kepler *]{}]{}time series data suggests that V344 Lyr and perhaps other [[*Kepler *]{}]{}-field CVs that display negative superhumps may significantly advance our understanding of the evolution of the mass distribution in tilted accretion disks. The cause of disk tilts in CVs is still not satisfactorily explained. In the low-mass x-ray binaries it is believed that radiation pressure can provide the force necessary to tilt the disk out of the orbital plane [@petterson77; @ip90; @foulkes06; @ip08]; however, this mechanism is not effective in the CV scenario. @bow88 suggested in their work on TV Col that magnetic fields near the L1 region might deflect the accretion stream out of the orbital plane, but as noted in @wb07 the orbit-averaged angular momentum vector of a deflected stream would still be parallel to the orbital angular momentum vector. @murrayea02 demonstrated numerically that a disk tilt could be generated by instantaneously turning on a magnetic field on the secondary star. Although their tilt decayed with time (the orbit-averaged angular momentum argument again), their results suggest that changing magnetic field geometries could generate disk tilt. Assuming that the disk viscosity is controlled by the MRI [@bh98; @balbus03], it is plausible that differentially-rotating plasmas may also be subject to magnetic reconnection events (flares) which are asymmetrical with respect to the disk plane, or that during an outburst the intensified disk field may couple to the tilted dipole field on the primary star [e.g., @lai99] or the field of the secondary star [@murrayea02]. With these ideas in mind, the behavior of V344 Lyr during outbursts 2, 10, 11, 17, and 19 is tantalizing.
First, again consider the 2D DFTs from Q2, Q3, and Q4 shown in Figures \[fig: 2dDFTq2\], \[fig: 2dDFTq3\] and \[fig: 2dDFTq4\], respectively. In each of these cases, there is power generated at a frequency consistent with the negative superhump frequency on the decline from maximum light. Outbursts 2 and 10 appear to excite a frequency of roughly 12 c/d ($\sim$2 hr), outburst 17 excites the negative superhump frequency for $\sim$3 days, and outbursts 11 and 19 appear to excite power at the negative superhump frequency that rapidly evolves to shorter periods. We show the SAP light curves for these outbursts as well as the residual light curves in Figure \[fig: dnofig\]. The residual light curves for these 5 outbursts all appear to show the excitation of a frequency near or slightly greater than the negative superhump frequency that dominates early in Q2. This is about 1/3 of the normal outbursts in the 3 quarters of [[*Kepler *]{}]{}data – the other 12 outbursts do not show evidence for having excited new frequencies. Thus, while additional data are clearly required and our conclusions are speculative, we suggest that these results support a model in which the disk tilt is generated by the transitory (impulsive) coupling between an intensified disk magnetic field and the field of the primary or secondary star. The fact that these 5 outburst events yield frequencies near 12 c/d appears to support the model that it is the mass in the outer disk that is initially tilted out of the plane. Conclusions =========== We present the results of the analysis of 3 quarters of [[*Kepler *]{}]{}time series photometric data from the system V344 Lyr. Our major findings are: 1. The orbital, negative superhump, and positive superhump periods are ${P_{\rm orb}}=2.11$ hr, $P_- = 2.06$ hr, and $P_+ = 2.20$ hr, giving a positive superhump period excess of $\epsilon_+ = 4.4$%, and a negative superhump period deficit of $\epsilon_- = 2.5$%. 2.
The quality of the [[*Kepler *]{}]{}data is such that we can constrain significantly the models for accretion disk dynamics that have been proposed in the past several years. 3. The evolution of the pulse shapes and phases of the positive superhump residual light curve provides convincing evidence in support of the two-source model for positive superhumps. Early in the superoutburst, viscous dissipation in the strongly convergent flows of the flexing disk provides the modulation observed at the superhump frequency. Once the system has returned to quiescence, the modulation is caused by the periodically-variable dissipation at the bright spot as it sweeps around the rim of the still non-axisymmetric, flexing disk. During the transition the O-C phase diagram shows a shift of $\sim0.5$ in phase. 4. Superoutbursts begin as normal DN outbursts. The rise to superoutburst is largely explained by the thermal-viscous limit cycle model discussed in Paper II. Beyond this luminosity source, which does a reasonable job of matching the lower envelope of the superoutburst light curve, there is additional periodic dissipation that generates the superhump signals. The sources of the periodic dissipation are (i) the strongly convergent flows that are generated once per superhump cycle as the disk is compressed in the radial direction opposite the secondary, and (ii) the variable depth of the bright spot as it sweeps around the rim of the non-axisymmetric oscillating disk. 5. Numerical experiments that individually isolate the two proposed physical sources of the positive superhump signal yield results that are broadly consistent with the signals in the data. 6. The positive superhumps show significant changes in period that occur in both superoutbursts. The average $\dot P_+ \sim -6\times10^{-5}\rm\ s\ s^{-1}$ for the first superoutburst and $\dot P_+ \sim -9\times10^{-5}\rm\ s\ s^{-1}$ for the second are consistent with literature results.
The data reveal that $\dot P_+$ itself appears to be increasing relatively linearly during much of the plateau phase at an average rate for the two superoutbursts of $\ddot P \sim$$10^{-9}\rm\ s^{-1}$. 7. The negative superhumps show significant changes in period with time, resulting from the changing mass distribution (moment of inertia) of the tilted disk. As the mass of the inner disk increases before outburst 1, the retrograde precession period increases, consistent with theoretical predictions. These data are rich with unmined information. 8. Negative superhumps appear to be excited as a direct result of some of the dwarf nova outbursts. We speculate that the MRI-intensified disk field can couple to the field of the primary or secondary star and provide an impulse that tilts the disk out of the orbital plane. Continued monitoring by [[*Kepler *]{}]{}promises to shed light on this important unsolved problem. The system V344 Lyr continues to be monitored at short cadence by the [[*Kepler *]{}]{}mission. It will undoubtedly become the touchstone system against which observations of all other SU UMa CVs will be compared, as the quantity and quality of the time series data are unprecedented in the history of the study of cataclysmic variables. The [[*Kepler *]{}]{}data for V344 Lyr promise to reveal details of the micro- and macrophysics of stellar accretion disks that would be impossible to obtain from ground-based observations. [[*Kepler *]{}]{}was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate. All of the data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts. 
This research was supported in part by the American Astronomical Society’s Small Research Grant Program in the form of page charges. We thank Marcus Hohlmann from the Florida Institute of Technology and the Domestic Nuclear Detection Office in the Dept. of Homeland Security for making computing resources on a Linux cluster available for this work. We thank Joseph Patterson of Columbia University for sending us the data used in Figure 19 in electronic form. [*Facilities:*]{} Ak, T., Bilir, S., Ak, S., & Eker, Z. 2008, New Astronomy, 13, 133 Balbus, S. A. 2003, , 41, 555 Balbus, S. A., & Hawley, J. F. 1998, Reviews of Modern Physics, 70, 1 Barrett, P., O’Donoghue, D., & Warner, B. 1988, , 233, 759 Bildsten L., Townsley D. M., Deloye C. J., Nelemans G., 2006, ApJ, 640, 466 Bonnet-Bidaud, J. M., Motch, C., & Mouchet, M. 1985, , 143, 313 Borucki, W. J., et al. 2010, Science, 327, 977 Caldwell, D. A., et al. 2010, , 713, L92 Cannizzo, J. K. 1993, Accretion Disks in Compact Stellar Systems, ed. J. C. Wheeler (Singapore: World Scientific), 6 Cannizzo, J. K. 1993, ApJ, 419, 318 Cannizzo, J. K. 1998, ApJ, 494, 366 Cannizzo, J. K., Still, M. D., Howell, S. B., Wood, M. A., & Smale, A. P. 2010, ApJ, 725, 1393 Cannizzo, J. K., Smale, A. P., Still, M. D., Wood, M. A., & Howell, S. B. 2011, ApJ, submitted Charles, P. A., Kidger, M. R., Pavlenko, E. P., Prokof’eva, V. V., & Callanan, P. J. 1991, , 249, 567 Chochol, D., Katysheva, N. A., Shugarov, S. Y., Volkov, I. M., & Andreev, M. V. 2010, Contributions of the Astronomical Observatory Skalnate Pleso, 40, 19 Faulkner, J., Flannery, B. P., & Warner, B. 1972, , 175, L79 Feldmeier, J. J., et al. 2011, , in press (arXiv:1103.3660) Fontaine, G., et al.  2011, , 726, 92 Foulkes, S. B., Haswell, C. A., & Murray, J. R. 2006, , 366, 1399 Frank, J., King, A., & Raine, D. J. 2002, Accretion Power in Astrophysics, by Juhan Frank and Andrew King and Derek Raine, pp. 398. ISBN 0521620538. 
Cambridge, UK: Cambridge University Press, February 2002., Gao, W., Li, Z., Wu, X., Zhang, Z., & Li, Y. 1999, , 527, L55 Gilliland, R. L., et al. 2010, , 122, 131 Haas, M. R., et al. 2010, ApJL, 713, L115 Harvey, D. A., Skillman, D. R., Kemp, J., Patterson, J., Vanmunster, T., Fried, R. E., & Retter, A. 1998, , 493, L105 Hellier, C. 2001, Cataclysmic Variable Stars: How and Why They Vary, Springer-Praxis Books in Astronomy & Space Sciences: Praxis Publishing Hessman, F. V., Mantel, K.-H., Barwig, H., & Schoembs, R. 1992, , 263, 147 Howell, S. B., Reyes, A. L., Ashley, R., Harrop-Allin, M. K., & Warner, B. 1996, , 282, 623 Hynes, R. I., et al. 2006, , 651, 401 Iping R. C., Petterson J. A., 1990, A&A, 239, 221 Ivanov P. B., Papaloizou J. C. B., 2008, MNRAS, 384, 123 Jenkins, J. M., et al.  2010, , 713, L87 Kato, T. 1993, , 45, L67 Kato, T., Poyner, G., & Kinnunen, T. 2002, , 330, 53 Kato, T., et al. 2009, , 61, 395 Kato, T., et al. 2010, , 62, 1525 Kim, Y., Andronov, I. L., Cha, S. M., Chinarova, L. L., & Yoon, J. N. 2009, , 496, 765 Knigge, C., Baraffe, I., & Patterson, J. 2011, , 194, 28 Koch, D. G., et al. 2010, , 713, L79 Kunze, S. 2002, in ASP Conf. Series 261: The Physics of Cataclysmic Variables and Related Objects, eds. B.T. Gänsicke, K. Beuermann, & K. Reinsch, 497 Kunze, S. 2004, Revista Mexicana de Astronomia y Astrofisica Conference Series, 20, 130 Lai, D. 1999, , 524, 1030 Larwood, J. D. 1997, , 290, 490 Larwood, J. 1998, , 299, L32 Larwood, J. D., Nelson, R. P., Papaloizou, J. C. B., & Terquem, C. 1996, , 282, 597 Larwood, J. D., & Papaloizou, J. C. B. 1997, , 285, 288 Lasota, J.-P. 2001, New Astron. Rev., 45, 449 Mineshige, S., Hirose, M., & Osaki, Y. 1992, , 44, L15 Montgomery, M. H., & Odonoghue, D. 1999, Delta Scuti Star Newsletter, 13, 28 Murray J. R., Chakrabarty D., Wynn G. A., Kramer L., 2002, MNRAS, 335, 247 Nelemans, G. 2005, in ASP Conf. Ser. 330, The Astrophysics of Cataclysmic Variables and Related Objects, ed. J.-M. Hameury & J.-P. 
Lasota (San Francisco: ASP), 27 Nelemans, G., Steeghs, D., & Groot, P. J. 2001, , 326, 621 O’Donoghue, D., & Charles, P. A. 1996, , 282, 191 Osaki, Y. 1985, , 144, 369 Osaki, Y. 1989, , 41, 1005 Papaloizou, J. C. B., Larwood, J. D., Nelson, R. P., & Terquem, C. 1997, Accretion Disks - New Aspects, 487, 182 Papaloizou, J. C. B., & Terquem, C. 1995, , 274, 987 Patterson, J. 1999, in Disk Instabilities in Close Binary Systems, eds. S. Mineshige and J. C. Wheeler, (Kyoto: Universal Acad. Press), 61 Patterson, J., Halpern, J., & Shambrook, A. 1993, , 419, 803 Patterson, J., Jablonski, F., Koen, C., O’Donoghue, D., & Skillman, D. R. 1995, , 107, 1183 Patterson, J., Kemp, J., Jensen, L., Vanmunster, T., Skillman, D. R., Martin, B., Fried, R., & Thorstensen, J. R. 2000, , 112, 1567 Patterson, J., Sterner, E., Halpern, J. P., & Raymond, J. C. 1992, , 384, 234 Patterson, J., Thomas, G., Skillman, D. R., & Diaz, M. 1993, , 86, 235 Patterson, J., et al. 2002, , 114, 65 Patterson, J., et al. 2002, , 114, 721 Patterson, J., et al. 2003, , 115, 1308 Patterson, J., et al. 2005, , 117, 1204 Petterson J. A., 1977, ApJ, 216, 827 Provencal, J. L., et al. 1995, , 445, 927 Retter, A., Leibowitz, E. M., & Ofek, E. O. 1997, , 286, 745 Retter, A., Chou, Y., Bedding, T. R., & Naylor, T. 2002, , 330, L37 Roelofs, G. H. A., Groot, P. J., Nelemans, G., Marsh, T. R., & Steeghs, D. 2007, , 379, 176 Rolfe, D. J., Haswell, C. A., & Patterson, J. 2001, , 324, 529 Schoembs, R. 1986, , 158, 233 Simpson, J. C., & Wood, M. A. 1998, , 506, 360 Skillman, D. R., Harvey, D., Patterson, J., & Vanmunster, T. 1997, , 109, 114 Skillman, D. R., Patterson, J., Kemp, J., Harvey, D. A., Fried, R. E., Retter, A., Lipkin, Y., & Vanmunster, T. 1999, , 111, 1281 Smak, J. 1967, Acta Astron., 17, 255 Smak, J. 2007, Acta Astron., 57, 87 Smak, J. 2008, Acta Astron., 58, 55 Smak, J. 2009, Acta Astron., 59, 121 Smak, J. 2010, Acta Astron., 60, 357 Smak, J. 2011, Acta Astron., 61, 59 Smith, A. J., Haswell, C. A., Murray, J. 
R., Truss, M. R., & Foulkes, S. B. 2007, , 378, 785 Solheim, J.-E. 2010, , 122, 1133 Stanishev, V., Kraicheva, Z., Boffin, H. M. J., & Genkov, V. 2002, , 394, 625 Sterken, C., Vogt, N., Schreiber, M. R., Uemura, M., & Tuvikene, T. 2007, , 463, 1053 Still, M., Howell, S. B., Wood, M. A., Cannizzo, J. K., & Smale, A. P. 2010, , 717, L113 Templeton, M. R., et al. 2006, , 118, 236 Van Cleve, J., ed. 2010, Kepler Data Release Notes 6, KSCI-019046-001. Vogt, N. 1982, , 252, 653 Warner, B. 1995a, Cataclysmic Variable Stars (Cambridge: Cambridge University Press) Warner, B. 1995b, , 225, 249 Whitehurst, R. 1988, , 232, 35 Wood, M. A., et al. 2005, , 634, 570 Wood, M. A., & Burke, C. J. 2007, , 661, 1042 Wood, J., Horne, K., Berriman, G., Wade, R., O’Donoghue, D., & Warner, B. 1986, , 219, 629 Wood, M. A., Montgomery, M. M., & Simpson, J. C. 2000, , 535, L39 Wood, M. A., Thomas, D. M., & Simpson, J. C. 2009, , 398, 2110 Woudt, P. A., Warner, B., Osborne, J., & Page, K. 2009, , 395, 2177 Zhao, Y., Li, Z., Wu, X., Peng, Q., Zhang, Z., & Li, Z. 2006, , 58, 367 [^1]: For completeness, we note that recently Smak (2009, 2011) has proposed that the standard model, described above, does not explain the physical source of observed superhump oscillations. Instead, he suggests that irradiation on the face of the secondary is modulated, which yields a modulated mass transfer rate $\dot M_{\rm L1}$, which in turn results in modulated dissipation of the kinetic energy of the stream.
<?php

namespace Guzzle\Plugin\Cache;

use Guzzle\Common\Exception\InvalidArgumentException;
use Guzzle\Http\Message\RequestInterface;

/**
 * Determines a request's cache key using a callback
 */
class CallbackCacheKeyProvider implements CacheKeyProviderInterface
{
    /**
     * @var \Closure|array|mixed Callable method
     */
    protected $callback;

    /**
     * @param \Closure|array|mixed $callback Callable method to invoke
     *
     * @throws InvalidArgumentException
     */
    public function __construct($callback)
    {
        if (!is_callable($callback)) {
            throw new InvalidArgumentException('Method must be callable');
        }
        $this->callback = $callback;
    }

    /**
     * {@inheritdoc}
     */
    public function getCacheKey(RequestInterface $request)
    {
        return call_user_func($this->callback, $request);
    }
}
Social Mining using R

1. 200 tweets are extracted from hashtag "#california" and 200 from hashtag "#newyork".
2. Then create 2 corpora from the 2 datasets.
3. Preprocess the corpora using the {tm} package from R.
4. Compute and display the most frequent terms (words) in each corpus.
5. Create 2 word clouds from the most frequent terms.
6. Compute the sentiment scores, i.e. determine whether words used in the tweets are more positively or negatively charged (emotionally).

![sentimentscores.png](/site_media/media/7d5429b20e891.png)

### Sentiment scores summary ###

In general, tweets from both states have positive sentiments. However, tweets from #california appear to have a more negative connotation than those from #newyork.

## Facebook API ##

1. Consume the 100 most recent Facebook posts by user "joebiden" using getPage() from R's {RFacebook} package.
   a. Find the most liked post and its popularity.
   b. Find the most commented post and the number of comments.
   c. Create a word cloud based on the most popular words used in the most commented post.
2. Consume the 100 most recent Facebook posts containing the word "petaluma" using searchPages().
   a. Rank the most frequent words and display them in a barplot.
/7 - 21. Find j, given that y(j) = 0. 0, 1, 3 Factor -8/13*w**3 + 2/13*w**2 - 10/13*w**4 + 0*w + 0. -2*w**2*(w + 1)*(5*w - 1)/13 Let p(g) be the second derivative of 2*g**6/105 - g**5/35 - g**4/21 + 2*g**3/21 - 11*g. Factor p(t). 4*t*(t - 1)**2*(t + 1)/7 Let a(g) be the third derivative of -1/60*g**5 + 1/210*g**7 + 0*g**3 + 0 + 1/336*g**8 - 1/120*g**6 + 0*g**4 - g**2 + 0*g. Find x such that a(x) = 0. -1, 0, 1 Let o(n) = -4*n**5 + 7*n**4 - 3*n**3 - 5*n**2 + 2*n. Let h(j) = -4*j**5 + 8*j**4 - 4*j**3 - 4*j**2 + 2*j. Let m(l) = -3*h(l) + 2*o(l). Factor m(s). 2*s*(s - 1)**3*(2*s + 1) Let k(q) be the first derivative of q**4/4 + q**3/3 - 3. Factor k(f). f**2*(f + 1) Let y(d) = 5*d**3 - 2*d**2 + 2*d - 1. Let o be y(1). Solve 26*x**o + 61*x**4 - 27*x**2 - 8*x + 2*x - 63*x**5 + 9*x**3 + 0*x = 0. -1/3, -2/7, 0, 1 Let s = 146 - 144. Suppose 8/9*l**s + 2/9 + 10/9*l = 0. What is l? -1, -1/4 Let j(p) = p**4 + p**3 + p**2 - p. Let q(o) = 10*o**4 + 12*o**3 - 6*o**2 - 8*o. Let w(n) = -4*j(n) + q(n). Factor w(v). 2*v*(v - 1)*(v + 2)*(3*v + 1) Let k = 89 - 444/5. Let u be 1 + (-4)/2 - -1. Factor -2/5*n**3 + k*n**2 + u + 0*n + 1/5*n**4. n**2*(n - 1)**2/5 Suppose -15 = -7*b + 2*b. Factor -3*r**4 + 0*r**3 + 2*r**3 + 4*r**4 - b*r**2 + 4*r**2. r**2*(r + 1)**2 Let u(t) = t + 4. Let v be u(0). Let h(j) be the second derivative of 1/6*j**2 + 0 + 1/36*j**v + j - 1/9*j**3. Determine g, given that h(g) = 0. 1 Suppose t**4 - 13*t - 6*t**3 - 19*t - 5 + 48*t - 10*t + 4*t**2 = 0. What is t? -1, 1, 5 Let o(m) be the third derivative of 1/24*m**4 - 1/30*m**5 + 0*m + 0*m**3 - 2*m**2 + 0 + 1/120*m**6. Solve o(h) = 0. 0, 1 Let c be (-9)/(-6) + (-66)/4. Let s be ((-10)/c)/(1*2). Let -1/3*p**2 + 1/3*p**4 + 0 - 1/3*p + s*p**3 = 0. Calculate p. -1, 0, 1 Let g(h) be the third derivative of -h**9/37800 - h**8/8400 + h**6/900 + h**5/300 + h**4/12 - h**2. Let j(k) be the second derivative of g(k). Factor j(v). -2*(v - 1)*(v + 1)**3/5 Find o such that -6*o**2 - 296*o**3 + 298*o**3 + 4*o**4 - 2*o + 2*o**4 = 0. 
-1, -1/3, 0, 1 Factor -22*c**2 + 3*c**3 + 6*c**3 + 4*c + 9*c**3. 2*c*(c - 1)*(9*c - 2) Let g be (-6)/(-21) + 12/7. Determine i, given that 2*i**4 + 8*i**2 + i**4 - 4*i**2 + g*i**2 + 9*i**3 = 0. -2, -1, 0 Let p(y) = -22*y - 2. Let u be p(1). Let z = 26 + u. Let -1/3 + x**4 - 2/3*x**z - 1/3*x**5 - 2/3*x**3 + x = 0. What is x? -1, 1 Let g = -111 + 113. Let y(f) be the first derivative of 1/9*f**4 + 0*f**3 + 2/45*f**5 - g - 2/9*f - 2/9*f**2. Determine i, given that y(i) = 0. -1, 1 Let r(q) be the second derivative of q**6/135 - q**5/30 + q**4/27 - 30*q. Factor r(f). 2*f**2*(f - 2)*(f - 1)/9 Suppose -4*f - 2*n - 4 = 12, -3*f - 4*n - 2 = 0. Let w(s) = 6 - 4*s**2 + 0 - 3. Let r(q) = -9*q**2 + 7. Let d(l) = f*r(l) + 14*w(l). Factor d(c). -2*c**2 Let g(y) = -8*y**2 + 4*y - 5. Suppose -5*f - 5*i - 5 = 0, -3*i - 12 = i. Let h = -1 + f. Let q(m) = -m**2 - 1. Let w(p) = h*g(p) - 4*q(p). Factor w(c). -(2*c - 1)**2 Factor 3*n + 3*n + 3*n**2 + 6 + 3*n. 3*(n + 1)*(n + 2) Let c(y) = -y**2 - 6*y + 16. Let i be c(-7). Let b be ((-18)/21)/(i/(-42)). Factor 0 + 0*w - 2*w**2 + 7/4*w**5 + 13/2*w**b + 5*w**3. w**2*(w + 2)**2*(7*w - 2)/4 Let r = -37 - -78. Let -r - 4*x**2 + 0*x + 5 - 2*x - 22*x = 0. Calculate x. -3 Factor 0*y**2 + 0 + 0*y + 2/3*y**3. 2*y**3/3 Factor -s + 8 - 7 - 2*s**2 + 13 + 13*s. -2*(s - 7)*(s + 1) Let p(v) = -v**5 + v**4 - v**3 - v**2. Let g(i) = 2*i**5 - 14*i**4 + 5*i**3 + 11*i**2 + i - 1. Let w(z) = g(z) + 6*p(z). Factor w(s). -(s + 1)**3*(2*s - 1)**2 Let c be ((-94)/3)/((-4)/(-6)). Let h be -3*(c/3 - 1). Factor 5 - h*f**4 + 8*f - 3 + 90*f**3 - 48*f**2 - 2. -2*f*(f - 1)*(5*f - 2)**2 Suppose 0 = 7*j - 11*j. Let s(a) be the third derivative of 0 - 1/21*a**3 - 1/28*a**4 - 1/420*a**6 - 1/70*a**5 + j*a + 3*a**2. Factor s(u). -2*(u + 1)**3/7 Let y = -2/21 - 55/84. Let n = -1/2 - y. Factor -1/4*a**2 + 0*a + n. -(a - 1)*(a + 1)/4 Suppose 0 = j + 2 - 7. Suppose j*y + 4*c - 10 = 0, 0 = -3*y + 3*c + c + 6. Factor -2*b**4 + 0*b**2 - 3*b**2 + 3*b**2 - 1 - b + b**3 + 3*b**y. 
-(b - 1)**2*(b + 1)*(2*b + 1) Let u(r) = r**3 - 4*r**2 - 3*r + 2. Suppose -3*j + 5 = -1. Let b(w) = -w**3 + w**2 + w - 1. Let y(d) = j*b(d) + u(d). Suppose y(m) = 0. Calculate m. -1, 0 Let f(y) be the second derivative of y**7/168 + y**6/60 - y**5/80 - y**4/24 - 6*y. Factor f(z). z**2*(z - 1)*(z + 1)*(z + 2)/4 Suppose 2*h - 16 = -2*h. Suppose -4*c = 4*t, h*c + 5*t + 2 + 0 = 0. Factor 4*i**3 - 3*i**3 - 7*i**2 + 8*i**c. i**2*(i + 1) Let q(c) be the first derivative of -c**3/4 + c**2/2 - c/4 + 4. Factor q(h). -(h - 1)*(3*h - 1)/4 Let j be ((-11)/33)/(1/(-6)). Let x(p) be the second derivative of 3*p + 0 + 0*p**j - 1/4*p**4 + 1/2*p**3. Factor x(u). -3*u*(u - 1) Let g(y) be the first derivative of -5 + 5/8*y**2 - 1/2*y - 1/3*y**3 + 1/16*y**4. Solve g(r) = 0. 1, 2 Let b(o) = 2*o - 3. Let r be b(5). Factor -r + k**3 + 7. k**3 Find h, given that h**4 - 16*h**4 - 3*h + 5*h**3 + 40*h**2 + 23*h = 0. -1, -2/3, 0, 2 Let o(z) be the first derivative of 0*z**2 + 1/3*z**4 + 2 + 2/9*z**3 + 2/15*z**5 + 0*z. Determine r, given that o(r) = 0. -1, 0 Suppose -8*p + 2/3*p**4 + 8/3 - 4*p**3 + 26/3*p**2 = 0. What is p? 1, 2 What is s in -15*s**3 + 22*s**3 + 0*s**2 + 3*s**2 - s**2 = 0? -2/7, 0 Let o(d) = 3*d**3 + 2*d**2 + 2. Let l(r) = r**3 + 1. Let u(j) = -10*l(j) + 5*o(j). Find k such that u(k) = 0. -2, 0 Determine t, given that -2/5 + 0*t**2 - 1/5*t**3 + 3/5*t = 0. -2, 1 Suppose 5*f + 4 = 29. Let n(r) = -r**2 + 6*r - 5. Let k be n(f). Suppose 2/5 + k*w**2 - 2/5*w**4 - 4/5*w**3 + 4/5*w = 0. What is w? -1, 1 Let n(z) be the first derivative of z**7/42 + z**6/10 + z**5/10 - 6*z - 5. Let k(d) be the first derivative of n(d). Find i such that k(i) = 0. -2, -1, 0 Let -20*y**4 + 4*y**4 + 60*y**3 - 6*y + 8 + 18*y - 64*y**2 = 0. What is y? -1/4, 1, 2 Let x(k) be the first derivative of -k**5/50 + k**4/20 + 3. Determine a so that x(a) = 0. 0, 2 Let r(i) be the second derivative of 0 + 1/12*i**4 + 0*i**2 - 3*i - 1/20*i**5 + 0*i**3. Factor r(j). 
-j**2*(j - 1) Let v(g) be the second derivative of g**7/21 - 4*g**6/15 + g**5/2 - g**4/3 + 15*g. Factor v(n). 2*n**2*(n - 2)*(n - 1)**2 Let q(y) = 6*y - 2. Let i be q(1). Find p, given that -2/5*p**i - 6*p**2 + 18/5*p + 0 + 14/5*p**3 = 0. 0, 1, 3 Let s(h) be the second derivative of -3/5*h**3 + 4*h + 0 - 1/5*h**4 - 1/50*h**5 + 0*h**2. Factor s(d). -2*d*(d + 3)**2/5 Let n be (2/12)/(10/40). Let m(z) be the third derivative of 0 - n*z**3 - 1/60*z**5 + z**2 - 1/6*z**4 + 0*z. Factor m(g). -(g + 2)**2 Let l(v) be the first derivative of -v**3/15 - v**2/5 - v/5 + 3. Solve l(u) = 0. -1 Let p(v) = v**3 - 5*v**2 + 3. Let k be 0 - 0 - 10/(-2). Let j be p(k). What is m in -4*m**2 + 0*m**2 + 2*m**j + m**2 + m**2 - 2*m + 2*m**4 = 0? -1, 0, 1 Let l(n) be the first derivative of n**3/3 - 7*n**2/2 + 3*n - 2. Let k be l(7). Factor 4*o**4 - 2*o**2 + 4*o**2 - 2*o**k - 6*o**5 - 14*o**4. -2*o**2*(o + 1)**2*(3*o - 1) Suppose -s = p - 8, -5*s = 2*p - 4*s - 11. Let i(j) be the first derivative of 0*j + 2 - 1/6*j**4 + 1/3*j**2 + 4/9*j**p - 4/15*j**5. Let i(n) = 0. What is n? -1, -1/2, 0, 1 Let w(l) = -5*l**3 + 2*l**2 + 3. Let y = -9 - -6. Let n(z) = 6*z**3 - 2*z**2 - 4. Suppose 0 = -t - 2 - 2. Let p(i) = t*w(i) + y*n(i). Solve p(b) = 0 for b. 0, 1 Let q(p) be the third derivative of p**8/168 + 11*p**7/210 + p**6/6 + 11*p**5/60 - p**4/6 - 2*p**3/3 - 10*p**2. Let q(a) = 0. What is a? -2, -1, 1/2 Factor 7/2*s**4 + 0 + 0*s - 9/2*s**5 + s**3 + 0*s**2. -s**3*(s - 1)*(9*s + 2)/2 Let r = -241 - -244. Factor 2/3*u**2 - 1/3*u + 2/3*u**r - 1/3*u**4 - 1/3*u**5 - 1/3. -(u - 1)**2*(u + 1)**3/3 Let g = -1 - -8. Let q = -2 + g. Solve -2*a - 2*a**q + 6*a**3 + 4*a**2 - 2*a**2 - 2*a**4 + a - 3*a = 0 for a. -2, -1, 0, 1 Let i = 81 + -79. Determine f, given that 2 + 1/2*f**i - 2*f = 0. 2 Let -18/11 + 51/11*x + 40/11*x**2 + 7/11*x**3 = 0. What is x? -3, 2/7 Let q(d) be the third derivative of d**5/16 - 7*d**4/24 - d**3/6 + 2*d**2 + 57. Factor q(f). 
(f - 2)*(15*f + 2)/4 Factor -2/5*s**5 - 4/5*s**4 + 0*s**3 + 0 + 4/5*s**2 + 2/5*s. -2*s*(s - 1)*(s + 1)**3/5 Let g(t) be the third derivative of -t**6/180 + t**4/36 + 3*t**2. What is j in g(j) = 0? -1, 0, 1 Let m(i) = i**3 + 7*i**2 - i - 5. Let u be m(-7). Suppose -3*h - 4*k = -8, 2 + 8 = 5*k. Factor 0 + 2*l**3 + h*l + 4/5*
1
0.038809
0.99997
2015 CEA Winner In Nonprofit: The Harrelson Center Standing on the corner of North Fourth and Princess streets are the remnants of the former New Hanover County Law Enforcement Center. In the past decade, it transformed into The Harrelson Center Inc., an independent nonprofit center focused on providing an affordable home for charitable organizations looking to aid locals in need. Some initially thought the old sheriff’s office building and jail needed to be torn down for fresher construction. But First Baptist Church members thought otherwise. “The idea was really a dream birthed out of the mission work already going on at First Baptist Church,” said Vicki Dull, executive director of The Harrelson Center. With the aid of First Baptist Church and donations by Bobby Harrelson, who asked that the center be named for his late wife, Jo Ann Carter Harrelson, the center opened its doors in 2005. “It’s a business model we sort of developed on our own,” Dull said. “What brought the current partners here is the desire by the board to address the issues of the community.” Each partner works in a collected effort to improve educational and employment opportunities, health care, support systems and affordable housing for both its nonprofits and the community. While The Harrelson Center’s primary aim is to provide for its locals, its staff works diligently to offer an inexpensive home to nonprofits at a time when finding cheap rent can be a difficult task. Currently the center’s nonprofits pay an all-inclusive rental cost, consisting of utilities, parking, and security, at below-market values. The model allows the organizations to better utilize funding for the benefit of those referred to the center, officials said. Grouping the nonprofits together also provides an avenue of marketing and volunteer opportunities for its nonprofit staffers and allows simpler means of group collaboration. 
In addition, it offers an array of support choices in close proximity for individuals in need. “We strive here to help those who are trying to help themselves,” Dull said. Since its creation, The Harrelson Center’s space has seen several renovations to provide the best environment for its affiliates. This year, The Harrelson Center is undergoing its Unlock Hope Campaign. For the campaign, the center made financial plans to renovate the fourth floor and former jail tower to expand for current groups and add more. By the end of spring, Phoenix Employment Ministry and A Safe Place will be able to serve more people, and three more nonprofits can join The Harrelson Center, officials said. “We look forward to having a shared community space in that new tower that is available to our partners for their fundraising events and support group meetings,” Dull said.
2
1.237752
0.069748
Q: macro for cmidrule results in staircase I'm doing a table with booktabs: \documentclass{article} \usepackage{booktabs} \begin{document} \newcommand{\crI}[2]{\cmidrule(#1){#2}} \begin{tabular}{@{}lllll@{}} \toprule \multicolumn{1}{c}{} & a & b & c & d \\ %\cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(l){5-5} % \crI{lr}{2-2} \crI{lr}{3-3} \crI{lr}{4-4} \crI{l}{5-5} q1 & 1 & 2 & 3 & 4 \\ q2 & 1 & 2 & 3 & 4 \\ \bottomrule \end{tabular} \end{document} I made a shortcut \crI for the underrules \cmidrule, but they result in a staircase. With the upper line commented out, I get: With the lower line, however, I get: A: \cmidrule looks ahead to see whether another \cmidrule follows, so it can put the rules on the same line; the shortcut defeats that. You could duplicate the definition and make it look ahead for \crI, but unless you are doing lots of these, that will probably take more characters than you save using the shortcut.
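If the goal is only a shorter name, one workaround (a sketch, assuming booktabs detects the following rule with an \ifx comparison against \cmidrule, which holds in current booktabs) is to create the alias with \let instead of \newcommand: a \let-equal token has the same meaning as \cmidrule and so passes the look-ahead test, while a wrapper macro does not.

```latex
\documentclass{article}
\usepackage{booktabs}
% \let makes \crI *identical* to \cmidrule, so booktabs'
% look-ahead still recognises the next rule and keeps the
% rules on one line instead of producing a staircase.
\let\crI\cmidrule
\begin{document}
\begin{tabular}{@{}lllll@{}}
\toprule
 & a & b & c & d \\
\crI(lr){2-2} \crI(lr){3-3} \crI(lr){4-4} \crI(l){5-5}
q1 & 1 & 2 & 3 & 4 \\
q2 & 1 & 2 & 3 & 4 \\
\bottomrule
\end{tabular}
\end{document}
```

The trade-off is that \let cannot bake in the trimming argument, so each rule still spells out its own (lr)/(l) in \cmidrule's native syntax.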
2
0.906636
0.999978
MORPHOLOGICAL CHANGES IN THE LIVER AFTER 8 HOURS OF PRESERVATION BY MACHINE PERFUSION. Patients with refractory cardiac arrest, who have undergone Extracorporeal Life Support (ECLS) for resuscitation but have not achieved cardiac recovery, can be considered as potential donors (Cardiac Death Donors). In such cases, it takes time to notify the relatives, obtain their consent in principle and finalize the clinical and legal documents. During this time, prior to obtaining consent for the removal of organs, ECLS can be extended. In this case, the extracorporeal circulation provides organ preservation "in situ" until the ethical, religious and organizational questions are decided. Accordingly, identifying the safe time limits during which the donor organs do not undergo changes incompatible with transplantation is extremely important. We aimed to study the morphological changes in the liver after 8 hours of extracorporeal circulation in experiments. The investigation was performed on 6 sheep with simulated cardiac arrest that underwent 8 hours of extracorporeal circulation with their own blood, using a new portable perfusion apparatus built on the basis of a universal volumetric blood pump of our own design. The device was connected to the body through the femoral artery and vein with special cannulas. Liver biopsies were performed before the start of perfusion and at 4 and 8 hours of the experiment. The histological slices were stained with H&E and assessed by standard criteria: level of steatosis (large-droplet macrovesicular steatosis [ld-MaS] and/or small-droplet macrovesicular steatosis [sd-MaS]); mononuclear portal inflammatory cell infiltrates; bile ductular proliferation; cholestasis; venous congestion; hepatocellular necrosis.
Before perfusion, no venous congestion, hepatocellular necrosis or ld-MaS was observed; less than 3% of cells showed sd-MaS, and mononuclear portal inflammatory cell infiltrates were found in only a few areas. Mild mixed ld-MaS and sd-MaS was found in less than 5% and 10% of the cells, respectively, at 4 and 8 hours of in vivo machine perfusion. Similarly, mild venous congestion was present in 1 of 6 livers after 4 hours of perfusion and in 2 of 6 livers after 8 hours. The number of necrotic hepatocytes and of portal triads infiltrated with mononuclear cells did not exceed 10% and 15%, respectively. However, there were no differences in the degree of biliary damage (cholestasis or ductular proliferation) correlating with the duration of the experiment. Taking into consideration all internationally accepted criteria for donor liver histological assessment, 8-hour in vivo perfusion of the liver in Cardiac Death Donors, using the machine of our own design providing pulsatile blood flow, ensures satisfactory preservation of the liver, making it suitable for successful transplantation.
2
1.381602
0.920195
Q: Django Rest Framework : Filtering against Table Field value I'm improving my Django web app with a Django REST API part, and I have a question about filtering against a table field value. I have my serializer class like this : class IndividuResearchSerializer(serializers.ModelSerializer) : class Meta : model = Individu fields = [ 'id', 'NumeroIdentification', 'Nom', 'Prenom', 'VilleNaissance', ] My views.py file with this class : class IndividuResearchAPIView(ListAPIView) : permission_classes = (IsAuthenticated,) authentication_classes = (JSONWebTokenAuthentication,) serializer_class = IndividuResearchSerializer def get_queryset(self): queryset = Individu.objects.all() NIU = self.request.query_params.get('NumeroIdentification') queryset = queryset.filter(NumeroIdentification=NIU) return queryset And my Python file, which simulates a connection to the REST API from another piece of software : import requests mytoken = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyX2lkIjoxLCJ1c2VybmFtZSI6IkFkbWluIiwiZXhwIjoxNTE5NzMxOTAxLCJlbWFpbCI6InZhbGVudGluQGRhdGFzeXN0ZW1zLmZyIiwib3JpZ19pYXQiOjE1MTk3MjgzMDF9.493NzJ4OUEzTKu5bZsZ9UafMwQZHz9pESMsYgfd0RLc" url = 'http://localhost:8000/Api/Identification/search/' NIU = "I-19312-00001-305563-2" response = requests.get(url, NIU = NIU, headers={'Authorization': 'JWT {}'.format(mytoken)}) print(response.text) I would like to enter a NIU value into my request in order to filter my table and return the object matching this NIU. For example, in my database I have this object : I would like to return this object through my API, but I don't know whether my get_queryset function is well-written or how I should write my API request. In my urls.py file, I have : url(r'^search/$', IndividuResearchAPIView.as_view() , name="Research"), So I am not filtering by URL.
I read these posts to get more background: Django REST framework - filtering against query param django rest framework filter and obviously the DRF doc: http://www.django-rest-framework.org/api-guide/filtering/#filtering-against-the-current-user A: You need to use this URL to filter: http://localhost:8000/Api/Identification/search/?NumeroIdentification=NIU_value. With the requests library, pass it via the params argument: response = requests.get(url, params={'NumeroIdentification': NIU}, headers={'Authorization': 'JWT {}'.format(mytoken)}).
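Stripped of Django, the view's get_queryset is just "keep the rows whose NumeroIdentification equals the query parameter". As a framework-free sketch (plain Python; the helper name filter_by_niu and the None guard are illustrative additions, not part of the original view, which filters unconditionally and so would match nothing when the parameter is missing):

```python
def filter_by_niu(records, niu):
    """Keep records whose NumeroIdentification matches `niu`.

    If `niu` is None (query parameter absent), return everything;
    the original get_queryset filters even on None, so a missing
    parameter would yield an empty result. The guard is a common
    refinement, not something the original view does.
    """
    if niu is None:
        return list(records)
    return [r for r in records if r["NumeroIdentification"] == niu]


individus = [
    {"id": 1, "NumeroIdentification": "I-19312-00001-305563-2"},
    {"id": 2, "NumeroIdentification": "I-19312-00002-111111-9"},
]

# Only the record with the requested NIU survives the filter.
matches = filter_by_niu(individus, "I-19312-00001-305563-2")
print(matches)
```

The same shape maps back to the view: `records` plays the role of `Individu.objects.all()` and the list comprehension stands in for `queryset.filter(NumeroIdentification=NIU)`.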
1
0.926995
0.755445
Non-alcoholic steatohepatitis (NASH) and hepatocellular carcinoma. Non-alcoholic fatty liver disease (NAFLD) is characterized by an excessive accumulation of fatty acids and triglycerides within the cytoplasm of the hepatocytes of non-alcohol users. The natural history varies according to the initial histological diagnosis. A current consideration is that cryptogenic cirrhosis may be representative of a late stage of non-alcoholic steatohepatitis (NASH), which has lost its features of necroinflammatory activity and steatosis in up to 80% of patients. Since NASH is able to progress to cirrhosis, hepatocellular carcinoma (HCC) development may be an end-stage of this disease. We report below two clinical cases of patients diagnosed with NASH who developed HCC. The relationship between NAFLD and HCC is reviewed.
1
1.881026
0.947098
HHRMA - Various Vacancies at Ibis Styles Bali Kuta Circle Ibis Styles Bali Kuta Circle is a four-star hotel centrally located in the business area of Simpang Siur, next to Bali Galleria Shopping Mall and 15 minutes from Ngurah Rai International Airport. The hotel offers 190 rooms and Family Suites with modern interior design and provides an All Day Dining Restaurant, Pool Bar, Kids Club, Fitness Center, Internet Corner and Meeting Rooms, as well as a Relaxing Lounge. We urgently require candidates to fill the positions below: Duty Manager Assistant Executive Housekeeper Front Office Supervisor Hotel Revenue Manager Assistant Hotel Revenue Manager General Requirements: At least 1-2 years' experience in the same field Self-driven in a dynamic environment Active person, good personality, positive attitude, hard worker and team player
1
0.475068
0.005668
Q: Select the first array only I have a set of PHP arrays coming from an outside service. I only want the most recent one; how can I get this in PHP? The array is like this: {"responseCode": 200, "message": "success"} I am splitting it apart like this: foreach ($submissions as $submissions) { print "<p><b>" . $submissions["message"] . "</b><br>"; } The problem is that every time I do this, it returns the message and response code from all of the arrays returned by my code. This is obviously because I am looping over it with "for each", but how can I set this to only loop once? A: print "<p><b>" . $submissions[0]["message"] . "</b><br>"; without the foreach...
1
1.04314
0.988068
The present invention relates to dehydrated hydrogels which are useful in the treatment of wounds. A hydrogel is a cross-linked macromolecular network swollen with water or biological fluids. A dehydrated hydrogel is a cross-linked macromolecular network that will swell to form a hydrogel upon contact with water or biological fluids. Due to their 'dehydrated' condition, dehydrated hydrogels are easy to store and transport. In addition, when applied in the dry state to a wound they behave as superabsorbent materials. According to a first aspect of the present invention there is provided a dehydrated hydrogel incorporating a plasticiser and fibres which have provided cations for cross-linking the dehydrated hydrogel. According to a second aspect of the present invention there is provided a method of producing a dehydrated hydrogel comprising dispersing fibres into an aqueous solution of a hydrogel precursor material incorporating a plasticiser, the fibres incorporating cations which are capable of cross-linking said precursor material to form a hydrogel, and evaporating water to produce a dehydrated hydrogel which incorporates said fibres, the dehydrated hydrogel being cross-linked by said cations. The dehydrated hydrogel may be in the form of a film having a thickness of, for example, 20 microns to 1 mm. The dehydrated hydrogels of the invention have a number of advantages. In particular, the presence of the fibres imparts strength and dimensional stability to the dehydrated hydrogel. Furthermore films of the dehydrated hydrogels have the property of swelling in only the thickness dimension and not in the other two dimensions (as compared to films of conventional dehydrated hydrogels which swell in all three dimensions). Typically, dehydrated hydrogels in accordance with the invention will comprise (based on the total weight of the fibres, polymer forming the hydrogel, and plasticiser, i.e.
excluding water and other components) 15 to 40% by weight of fibres, 10 to 35% by weight of polymer, and 5 to 75% plasticiser. More preferably the fibres and polymer together provide about 40-60%, ideally about 50%, by weight on the same weight basis, so that correspondingly the plasticiser provides 60-40%, ideally about 50%. Generally the amount of fibres will exceed the amount of polymer. For example the weight ratio may be 1.5-3:1. Typically the dehydrated hydrogel will contain less than 50% by weight of water, ideally less than 20%, based on the total weight of the dehydrated hydrogel. Examples of hydrogel precursor material which may be used include sodium alginate, sodium carboxymethyl cellulose, sodium pectinate, sodium O-carboxymethyl chitosan (OCC), sodium N,O-carboxymethyl chitosan (NOCC), sodium polyacrylate, and naturally occurring gums and synthetic polymers containing pendant carboxylic acid groups. The hydrogel precursor may consist wholly or partially of acemannan (or another component of Aloe Vera), a natural polymer known to accelerate healing of wounds. The acemannan may, for example, provide up to 80% of the matrix. The acemannan may be clinical grade material obtainable from Carrington Laboratories, Dallas, Tex., U.S.A. The fibres which are used contain a di- or higher valent cation which is effective for cross-linking the hydrogel. Examples of suitable cations include Ca2+, Zn2+, and cations which also act as enzyme cofactors. Particular preferred examples of fibres which may be used are calcium alginate fibres. The fibres will generally have a length of 1 to 80 mm and a thickness of 10 to 50 microns. The fibres may be such that they absorb water from the aqueous solution of the hydrogel precursor material during manufacture of the dehydrated hydrogel. Examples of suitable plasticisers include glycerol, polyethylene glycol, sorbitol and similar sugars, and PLURONIC® brand PEO/PPO polymers.
In a typical method of preparing a dehydrated hydrogel in accordance with the invention, the fibres, polymer and plasticiser in their relative requisite amounts are admixed with water such that the fibres, polymer and plasticiser together provide less than 5% by weight (e.g. less than 3%, e.g. 2%) of the resultant mixture. After thorough mixing, the dispersion may be cast to an appropriate thickness and water evaporated to give a dehydrated hydrogel product containing less than 50% water, more usually 20% or less. Dehydrated hydrogels in accordance with the invention have a number of advantages. In particular when applied to wounds (e.g. donor sites, abrasions, dermabrasions, surface wounds with high exudate or wide swings in exudate levels) they are capable of absorbing large amounts of exudate, e.g. up to 30 times their own weight, thereby rehydrating to form a hydrogel. If the dehydrated hydrogel is in the form of a film, it is found that the film swells in the thickness dimension without substantial swelling in the other two dimensions. Upon sufficient absorption of exudate, the film is capable of dissolving. The product of the invention is more absorbent than current commercial hydrogels, and is also light and easy to package. Dehydrated hydrogels in accordance with the invention may be laminated to hydrophilic films which have an increased breathability in the presence of liquid water as compared to moisture vapour alone. The use of such a film over the dehydrated hydrogel (i.e. on the side remote from the wound) ensures that water is vented from the dehydrated hydrogel through the film. Therefore the dissolution of the hydrogel may be controlled.
Typically the breathable film will be of a material which, as a 50 micron film, has a Moisture Vapour Transfer Rate (MVTR) in the presence of moisture vapour alone of 6,000 to 10,000 g m⁻² 24 hr⁻¹ as measured by ASTM E96B and an MVTR in the presence of liquid water (as measured by ASTM E96BW) of 6,000 to 10,000 g m⁻² 24 hr⁻¹. Typically the breathable film will have a thickness of 30-70 microns, more preferably 40-60 microns, e.g. about 50 microns. The breathable film may for example be of polyurethane. Suitable films are available from Innovative Technologies Limited under the designations IT325, IT425 and IT625. If desired, the dehydrated hydrogel may incorporate an active agent (e.g. an antimicrobial material) for delivery to a wound.
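The composition figures above reduce to simple batch arithmetic: choose dry-basis fractions inside the quoted ranges, then add enough water that fibres, polymer and plasticiser together make up the stated solids level (e.g. 2%) of the casting dispersion. A sketch with hypothetical target numbers, not taken from the patent's own examples:

```python
def casting_recipe(dry_mass_g, fibre_frac, polymer_frac, plasticiser_frac,
                   solids_frac=0.02):
    """Split a dry mass into fibre/polymer/plasticiser portions and
    compute the water needed so that the three solids together are
    `solids_frac` of the total dispersion. Fractions are on the
    water-free basis used in the text."""
    assert abs(fibre_frac + polymer_frac + plasticiser_frac - 1.0) < 1e-9
    # solids / (solids + water) = solids_frac  =>  water = solids*(1-f)/f
    water_g = dry_mass_g * (1.0 - solids_frac) / solids_frac
    return {
        "fibre_g": dry_mass_g * fibre_frac,
        "polymer_g": dry_mass_g * polymer_frac,
        "plasticiser_g": dry_mass_g * plasticiser_frac,
        "water_g": water_g,
    }

# 30% fibre / 20% polymer / 50% plasticiser at a 2% solids dispersion:
recipe = casting_recipe(10.0, 0.30, 0.20, 0.50)
print(recipe)  # about 3 g fibre, 2 g polymer, 5 g plasticiser, 490 g water
```

With these hypothetical numbers the fibre-to-polymer ratio is 1.5:1, inside the 1.5-3:1 range mentioned, and fibres plus polymer provide 50% of the dry mass, matching the "ideally about 50%" figure.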
2
1.418794
0.620091
Aldol-type compounds from water-soluble indole-3,4-diones: synthesis, kinetics, and antiviral properties. A straightforward transformation of indole-3,4-diones is reported. The reaction feasibility is evidenced by kinetic studies on a model substrate, revealing a double phase process with a first faster pseudo-first-order step (i.e., deprotonation of the dione and self-nucleophilic attack of the anion) and a subsequent slower dehydration of the intermediate. The overall process is faster at pH higher than the pK value of the investigated substrate. The biological relevance of new compounds has been assessed in vitro against herpes simplex virus type-1 (HSV-1), showing a more promising biological profile with respect to their precursors.
1
1.758842
0.980949
-ase The suffix -ase is used in biochemistry to form names of enzymes. The most common way to name enzymes is to add this suffix onto the end of the substrate, e.g. an enzyme that breaks down peroxides may be called peroxidase; the enzyme that produces telomeres is called telomerase. Sometimes enzymes are named for the function they perform, rather than substrate, e.g. the enzyme that polymerizes (assembles) DNA into strands is called polymerase; see also reverse transcriptase. The commonly used -ase suffix for naming enzymes was derived from the name diastase. See also Amylase DNA polymerase
3
1.885069
0.965407
Teen charged with DUI after livestreaming deadly car crash Obdulia Sanchez was livestreaming herself singing while driving with her sister and another teen — then, after a horrific crash, she turned the video back on to record her sister's death This July 22, 2017, photo provided by the Merced County Sheriff, shows Obdulia Sanchez in Merced, Calif. Sanchez has been arrested in California on suspicion of causing a deadly crash that she recorded live on Instagram. She was booked into the Merced County Jail on suspicion of DUI and vehicular manslaughter after Friday's crash that killed her 14-year-old sister and badly injured another 14-year-old girl. (Merced County Sheriff via AP) Obdulia Sanchez aimed the camera phone at her face as she rapped along to the song blaring over the radio and tried to control the car she was driving on a road in California's Central Valley. Then came tragedy, live-streamed in a horrifying Instagram video. The California Highway Patrol told Fox affiliate KTXL that 18-year-old Sanchez lost control of her 2003 Buick, drove off the edge of the road and then overcorrected. The car crashed into a barbed-wire fence and flipped over in a field, according to ABC affiliate KFSN. Sanchez's 14-year-old sister, Jacqueline, and another teen girl — who were in the back seat and were not wearing seat belts — were ejected from the tumbling car. "Hey, everybody, if I go to %$^& jail for life, you already know why," she began, adjusting the camera so that it showed her younger sister, motionless and bleeding from the head. My sister is %#@&^ dying. Look, I f&^% love my sister to death "My sister is %#@&^ dying. Look, I f&^% love my sister to death. I don't give a *@#$. Man, we about to die. This is the last thing I wanted to happen to us, but it just did. Jacqueline, please wake up."
1
0.422355
0.015613
Not even in Spain have I encountered a dining room as opulent yet breezy as this one, another success story from Fabio Trabocchi, best known for his Italian gifts to the city — but none more seductive than Del Mar. Consider the maritime name a prompt to try diced raw tuna on clear tomato jelly garnished with tiny sea beans. Or a crock of shrimp that arrives in a haze of garlic and chiles. Definitely slide a spoon into paella stained black with squid ink and decked out with wild calamari, smoky from the grill. Really, though, almost everything that exits the open kitchen deserves applause, be it house-baked bread slathered with striking-red crushed tomatoes, creamy golden fritters capped with stamps of Iberico ham or a wedge of tender potato omelet ringed with dots of saffron aioli. Trabocchi and his wife and business partner, Maria, populate the restaurant with some of the sharpest waiters in town, offer the most beautiful private rooms and tend to guests’ comfort with niceties such as pashmina shawls in cold weather. Few chefs enjoy the Midas touch of Fabio Trabocchi, whose see-and-be-seen Italian restaurants around Washington come with the advantage of terrific menus and top-flight service. The chef’s latest hit takes place on the Wharf and pays homage to the cooking of Spain, the origin of his equally savvy wife and business partner, Maria. Your first impression: What a sumptuous space! No matter where your eyes settle, there’s some fascinating detail to hold your gaze: the fish-shaped sculpture above your head, the hand-painted ceramic tiles beneath your feet or a server torching the bottom of a spoon of spreadable salami, to bring up its spicy flavor. Then the food starts flowing from the visible kitchen, which includes a dedicated paella stove, and your attention is fixed on such riches (and rich they are) as blushing lamb chops arranged with fried artichokes and creamy Manchego sauce. 
I knew that spring had truly sprung when I saw a classic potato omelet arranged with wild ramps and dabs of aioli, pale green with the season’s garlic. Every aspect of a meal puts the customer first, from the leather banquettes that support leisurely meals to leftovers that are retrieved from the host stand. Office mates might envy you your cuttlefish stew with sweet scallops and bright herbs the next day. Then again, they might also covet your having scored a reservation at one of the best restaurants, in one of the most exciting neighborhoods, in the entire region. One involves a variation on “awesome.” Whatever the exact word, it applies not just to the food, which is frequently luscious, but to the frisson in the dining room and the finesse of the staff. Sure to follow in any review, verbal or otherwise, is mention of cost. In typical Trabocchi fashion, the chef’s first deviation from Italian cooking is a pricey proposition. Even brunch can cost $100 a head (well, if the heads in question consume alcohol and order paella, one of Del Mar’s signature dishes). Let me be clear. Even if I wasn’t the beneficiary of someone’s generosity, or on an expense account, I’d save up for a meal here. In the three months since Del Mar has set sail, the restaurant has emerged as yet another example of why Washington is among the best cities in the nation for fine dining. Great ingredients and chic decor explain part of the story; a sense of commitment and a pride in doing everything just so make equally compelling impressions. Trabocchi does nothing halfway. Check out one of the lures at his raw bar. Three briny oysters from Prince Edward Island rest atop a fanciful, snail-shaped silver bowl filled with ice, garnished with seaweed and set on a gold place mat — the Mar-a-Lago of oyster presentations. The specimens are lovely on their own, but they pick up a smoky allure with the addition of some of the house-made hot sauce, coaxed from saffron and paprika, served on the side. 
The pleasure is fleeting, albeit first-class. As soon as the silver bowl is removed, a hot towel scented with fresh rosemary takes its place. The chorizo burger comes on a squid-ink-tinted bun. (Scott Suchman/For The Washington Post) Just as detailed is any order for charcuterie. As much as I love hand-cut slices of jamon Iberico, from pigs fed a diet of acorns, an ounce of the treat goes for $26. More affordable, and just as much of a kick, is two ounces of sobrasada for $14. A specialty of Mallorca, home to Trabocchi’s wife and business partner, Maria, the spreadable cured pork sausage is rolled out on a cart in a “bowl” of the sausage casing. The meat is the color of fire, the texture of pâté and shot through with smoked paprika. An attendant heats one of two spoons with a small blowtorch, so we can taste the difference between hot and room-temperature sobrasada. If you like the Italian ’nduja, you’ll appreciate its Spanish equivalent, especially as it’s offered here, with grilled bread drizzled with chestnut honey. Restraint, thy name ... isn’t mine. The menu is front-loaded with appetizers, hot and cold tapas that reveal some of the kitchen’s best work. Friends who text me for recommendations are encouraged to splurge on the foie gras torchon studded with membrillo (quince paste) and eaten on crisp bread with red onion jam, and the creamy chestnut soup ennobled with both lobster and a froth of sherry-laced “cappuccino.” Common-sounding tapas exit the kitchen tasting like a million bucks, which is another way of saying black truffle aioli (and black trumpet mushrooms) advance the cause of Del Mar’s refined take on the tortilla, a classic potato omelet. Del Mar — “the sea” in Spanish — makes a delicious case for octopus, cooked low and slow so that the skin takes on a pleasant gelatinous quality, then served on a zesty bed of crushed, olive oil-enriched potatoes. 
Diners who shy away from octopus because they've suffered through mushy or coarse flesh will be pleased to find neither here. Espelette pepper. Paprika. Much of the food at Del Mar relies on those and other quietly riveting seasonings. Consider the charcoal-kissed lamb chop, ringed with olive sauce and accompanied by a sweet pepper swollen with shredded braised lamb mixed with Manchego. Karla Ventura serves paella. (Scott Suchman/For The Washington Post) Don't tell José Andrés, but Del Mar dishes up the choicest paella right now. Made on a dedicated stovetop with short-grain bomba rice, the dish, apportioned for two or more, comes in four flavors and stripes of aioli. Wild mushrooms and thick slices of blood sausage draw me most in winter; near-raw duck breast, a fowl underscored with pockets of salt, marred the only paella I wouldn't want to repeat. Restaurants of all stripes feel compelled to offer a burger on their lunch menus, and Del Mar is no different — except, of course, that it gives the American totem a decidedly Spanish spin, with a patty shaped from racy chorizo and fatty pork shoulder and with a slathering of aioli instead of mustard or ketchup. The brioche bun? It's black, from squid ink, seemingly every chef's paint of choice these days. The real charm of the construction is the ringlets of perfectly fried squid between patty and bun that give the sandwich delightful lift. Then again, the spear of anchovy, green olive and pickled guindilla pepper that holds the mouthful together is fun, too, a nod to Spain's beloved pintxos bars, where the bold threesome is known as a Gilda (pronounced heel-da).
But I’ve come to prefer the Old World-suggestive veranda, soothing in green, with wicker chairs and windows facing the wharf’s boats and passersby. The bar at Del Mar. (Scott Suchman/For The Washington Post) Wildly popular since Day 1, Del Mar, like Fiola Mare, the Trabocchis’ Italian seafood restaurant in Georgetown, is already a VIP magnet. The strain of success, the burden of popularity, reveal occasional lapses. On the night the president of the Republic of Kazakhstan and his entourage swarmed the private dining room on the second floor, I couldn’t help but think the group was being attended to at the expense of those of us below; some waited so long for our entrees, it felt like Ken Burns might have been producing dinner. And I couldn’t help but feel I was being taken advantage of the night I asked for a red to accompany the duck paella and a server returned with “a Rioja that tastes like a Burgundy” that was double the price of the white wine that had preceded it. (No, I didn’t give him a cutoff point, but the first wine should have given him some direction. Note to self: Talk numbers the next time wine is being discussed.) The infrequent misses aside, Del Mar is not merely the best restaurant on the Southwest Waterfront, it’s among the city’s finest restaurants to emerge all last year. Like space travel and time to read Ron Chernow’s “Grant,” a meal here is a luxury. Hope that someone else is picking up the check. Tom Sietsema has been The Washington Post's food critic since 2000. He previously worked for the Microsoft Corp., where he launched sidewalk.com; the Seattle Post-Intelligencer; the San Francisco Chronicle; and the Milwaukee Journal. He has also written for Food & Wine.
1
0.795998
0.111137
Directory of leisure activities Activity is intense along the coast, rivers and lakes: catamarans, sailing dinghies, surfboards, surfing, canoeing, rowing ... The directory of boating and leisure activities provided by Boating in Brittany is here to help you. Look up a sailing club’s details and savour the pleasures of being on the water.
1
0.158442
0.000253
package io.github.privacystreams.utils; import android.app.AlarmManager; import android.app.PendingIntent; import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.content.IntentFilter; import android.os.SystemClock; /** * A scheduler based on alarm manager. */ public abstract class AlarmScheduler { private PendingIntent mAlarmIntent; private AlarmManager am; private BroadcastReceiver mReceiver; private Context ctx; public AlarmScheduler(Context ctx, String actionToken) { this.ctx = ctx; am = (AlarmManager) ctx.getSystemService(Context.ALARM_SERVICE); mAlarmIntent = PendingIntent.getBroadcast(ctx, 0, new Intent(actionToken), 0); mReceiver = new BroadcastReceiver() { @Override public void onReceive(Context context, Intent intent) { run(); } }; ctx.registerReceiver(mReceiver, new IntentFilter(actionToken)); } protected abstract void run(); public final void schedule(long delayMillis) { am.set(AlarmManager.ELAPSED_REALTIME_WAKEUP, SystemClock.elapsedRealtime() + delayMillis, mAlarmIntent); } public final void destroy() { am.cancel(mAlarmIntent); ctx.unregisterReceiver(mReceiver); } }
1
0.783629
1.000002
Implications of radiotherapeutical techniques. Radiotherapy must be taken into account when reconstruction of the breast is planned, because sclerosis and skin atrophy with telangiectasias must be avoided. This can be achieved by the use of high-energy radiation from 60Co and electrons from the linear accelerator, keeping in mind that postoperative treatment of the disease requires doses of 4,000 to 5,000 rads delivered over 4 to 5 weeks.
2
1.868394
0.884196
// build-pass // compile-flags: --crate-type lib -C opt-level=0 // Regression test for LLVM crash affecting Emscripten targets pub fn foo() { (0..0).rev().next(); }
1
0.213976
0.995099
The present invention is directed generally to telephony devices, and more particularly, to telephony devices having integrated messaging capabilities. As the telecommunications industry has grown, the number and variety of telephony devices have also dramatically increased. The use of telephony devices in mobile and cordless environments has increased accordingly. There has also been an increased need and interest in providing reliable and easy-to-use peripheral devices such as answering machines, caller ID boxes, and the like. Digital answering machines have gained widespread use among telecommunications consumers. A typical digital answering machine is formed as a stand-alone device which is coupled between a telephone and the subscriber line of the telephone in order to intercept and answer an incoming call under predefined conditions. The answering machine also provides the capability of storing messages from the calling party for later retrieval. Various approaches have been taken to integrate the functionality of an answering machine within a telephone. For example, the basic components of the digital answering machine have been incorporated into a telephone. Such systems typically include a digital voice memory for storing messages, including broadcast messages and received messages, and a digital signal processor (DSP) dedicated to answering machine functions such as compression of the messages, storage and retrieval. Another approach for integrating answering machine functions within an existing telephone is to provide an answering service remote from the telephone. This type of service routes unanswered calls to the remote answering service, where messages are stored for access over the subscriber line. As the telecommunications industry continues to grow, there remains an interest in providing increased accessibility to the various telephony functions, including answering functions. 
It is also desirable, however, to reduce the overall costs of the various telephony devices. Thus, there is generally a tension between a desire to provide added functionality while meeting the demands of lower costs. Generally, the present invention relates to communication devices having integrated messaging capabilities. In one particular embodiment, a communication device is provided which operates in a communication mode and a message mode. The communication device includes a speaker, a receiver provided to receive signals of a call received from a calling party and a memory arrangement for storing messages. A processor is coupled to the memory arrangement and is configured to code and decode signals in accordance with a cordless communication compression scheme used for cordless communication when in the communication mode. The processor is further configured to code signals received from the calling party, using the wireless communication compression scheme, for storage in the memory arrangement as a message when in the message mode. In accordance with another embodiment of the invention, a cordless telephone system having message recording capabilities is provided. The cordless phone includes a base station coupled to a switched telephone network. The base station includes a base station processing unit configured to receive signals from the switched telephone network and to code and decode the signals in accordance with a wireless transmission compression scheme. The base station further includes a transmitter/receiver coupled to the processing unit to transmit/receive coded signals. The cordless phone further includes a handset having a transmitter/receiver configured to transmit/receive coded signals for wireless communication with the base station, and a handset processing unit, coupled to the transmitter/receiver, configured to code and decode signals transmitted to and received from the base station. 
A memory arrangement is provided within the base station or the handset and is coupled respectively to either the base station processing unit or the handset processing unit. The memory arrangement is used to store messages which are coded by the respective one of the base station processing unit and the handset processing unit using the wireless transmission compression scheme. One embodiment of the invention provides messaging functions within a cordless phone system. In operation, the base station receives a call from a calling party. The cordless phone retrieves a broadcast message from a memory arrangement of the cordless phone in response to initiation of a message mode. The broadcast message is transmitted from the base station to the calling party. A message from the calling party is received at the base station and coded using a cordless transmission compression scheme used for cordless communication between the base station and the handset. The coded message from the calling party is stored in the memory arrangement of the cordless phone. In one particular embodiment, data transmitted between the base station and the handset are coded using adaptive differential pulse code modulation (ADPCM). In a further embodiment, the messages stored in the memory arrangement are also coded using ADPCM. The above summary of the present invention is not intended to describe each illustrated embodiment or every implementation of the present invention. The figures and the detailed description which follow more particularly exemplify these embodiments.
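The embodiments above hinge on one idea: the same compression scheme (ADPCM in the preferred embodiment) used over the cordless link also encodes the stored messages, so one codec serves both paths. As a rough illustration of the adaptive-differential principle only — this is a toy coder, not the ITU/IMA ADPCM standard, and every name below is invented for the sketch:

```go
package main

import "fmt"

// coder holds the shared predictor state. Encoder and decoder start from
// identical state and apply identical updates, so the decoder tracks the
// encoder exactly from the 4-bit-range codes alone.
type coder struct {
	predicted int // last reconstructed sample
	step      int // current quantizer step size
}

func newCoder() *coder { return &coder{step: 16} }

// adapt grows the step when the quantizer saturates and shrinks it otherwise.
func (c *coder) adapt(code int) {
	if code == 7 || code == -8 {
		c.step *= 2
	} else if c.step > 1 {
		c.step = c.step * 3 / 4
	}
	if c.step > 16384 {
		c.step = 16384
	}
	if c.step < 1 {
		c.step = 1
	}
}

// encodeSample stores only the quantized difference from the prediction.
func (c *coder) encodeSample(s int) int {
	diff := s - c.predicted
	code := diff / c.step
	if code > 7 {
		code = 7
	}
	if code < -8 {
		code = -8
	}
	c.predicted += code * c.step
	c.adapt(code)
	return code
}

// decodeSample rebuilds the sample by replaying the same state updates.
func (c *coder) decodeSample(code int) int {
	c.predicted += code * c.step
	c.adapt(code)
	return c.predicted
}

func main() {
	enc, dec := newCoder(), newCoder()
	for _, s := range []int{0, 50, 200, 120, -80} {
		code := enc.encodeSample(s)
		fmt.Printf("sample=%4d code=%3d reconstructed=%4d\n", s, code, dec.decodeSample(code))
	}
}
```

The point of the sketch is the shared state: because base station and handset would run the same `adapt` rule, the bit stream that crosses the cordless link can be written to message memory unchanged.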
2
1.007928
0.679308
[A ruptured mycotic aneurysm of the femoral artery due to Salmonella typhimurium]. Mycotic aneurysms of the femoral artery are rare. We report a new case of a mycotic aneurysm of the femoral artery caused by Salmonella typhimurium. The operation was performed as an emergency for the ruptured aneurysm. The origin of the aneurysm infection was unknown. The lesions were treated by resection and femoro-femoral bypass with PTFE. Microbiological examination of the operative material confirmed the infection. A subsequent bypass infection required excision of the graft and new revascularization with an iliofemoral saphenous vein bypass through the obturator foramen, together with prolonged antibiotic treatment.
2
1.167504
0.988217
Police are appealing for people to name a man depicted in an efit image suspected of accosting a 13-year-old girl in Dunmow. The local girl was walking across the recreation ground in Church Street when she was grabbed by the wrist by a man. The girl punched the man in the chest, broke free of his grip and ran away. She was not hurt. This happened at about 8am on May 21. The man is described as white, aged around 40, around 5ft 6ins to 5ft 8ins tall, of average build, with collar-length straight black hair, a black beard and moustache, big eyes and a flat-ended nose, and was wearing a black v-neck t-shirt, light green trousers with pockets on the side and black boots. He smelt of cigarettes. Anyone who recognises the man pictured should contact Ds Pete McCormack at Braintree CID on 101 or Crimestoppers on 0800 555 111.
1
0.272961
0.100438
Medium Hoop Earrings These Mad Max-style earrings are born to be bold, with a dash of badass. Made in an industrial-age design, they are crafted out of sterling silver, brass, and 24 karat gold on brass. They are secured with a hugging-hoop backing hidden on the other side. The precision design is hollowed out in the center for a lighter weight.
0
0.079389
0.000156
Q: get images from directory and add to ul I'm using a responsive pattern from Brad Frost's library to create an image grid (beta version here: http://yogeshsimpson.com/fry). I want to have three folders of images for the three portfolio categories that my client can just drop images into. From there I think PHP is the right tool to get all the images from a folder, wrap them in li and a tags and have them added to the ul. So on the homepage you see images from all three directories, and on the "lighting" page, for example, you see only images from that directory, etc. Again I'm assuming this is fairly easy to do with PHP, but it's a bit beyond my grasp. Any help would be appreciated. Much thanks. A: Something like this will work; run it from within the folder whose files you wish to list. <?php $thelist = ''; if ($handle = opendir('.')) { while (false !== ($file = readdir($handle))) { if ($file != "." && $file != "..") { $thelist .= '<li><a href="'.$file.'">'.$file.'</a></li>'; } } closedir($handle); } ?> <p>List of files:</p> <ul><?php echo $thelist; ?></ul> Or: <?php // if running from your root and you wish to list files from an 'images' sub-folder, // use $dir = 'images'; $dir = '.'; // The directory containing the files. // *.* will list all files. You can use *.jpg to list just JPG files, etc. $ext = '*.*'; ?> <ul> <?php foreach (glob($dir . '/' . $ext) as $file) { ?> <li><a href="<?php echo $file; ?>"><?php echo basename($file); ?></a></li> <?php } ?> </ul> Note that glob() already prefixes each result with the directory, so the href uses $file directly and basename() strips the directory for the link text.
1
1.07517
0.90126
PARIS (Reuters) - German Peter Gojowczyk was fined 25,000 euros ($29,210.00) on Thursday after he retired from his French Open first-round match against Britain’s Cameron Norrie. The International Tennis Federation (ITF) introduced new measures to stop players turning up injured or ill, only to retire in the first round and yet still pick up a lucrative first-round loser’s cheque — 40,000 euros at Roland Garros. Gojowczyk, who played the final in Geneva last weekend and practised at Roland Garros on Sunday, retired from his match in Paris citing hip pain as he was trailing 6-1 2-0. Another German, Mischa Zverev, was handed a similar fine at the Australian Open earlier this year. Top seed Rafa Nadal, the 10-times French Open champion, said: “I think it’s a good rule, because there is a lot of money on the slams. For a lot of players, (the fact) that they are in... a Grand Slam, and have a physical problem in that week, just playing the tournament helps a lot to save the year.”
1
0.658518
0.332741
Q: jQuery - If previous element doesn't have a specific class then remove that element I wrote this code, but it doesn't work. Goal: if the span previous to ".well-well" is NOT ".dont-remove", then remove that span. However, if the span previous to ".well-well" IS ".dont-remove", then do nothing. Example here: http://jsfiddle.net/sAebR/ if( $(".well-well").prev('span').not('.dont-remove') ){ $(".well-well").prev('span').remove(); } What I'm getting with this code is that it removes all spans that are previous to ".well-well", and I have no idea why. What am I doing wrong? A: You don't need the if: $(".well-well").prev('span').not('.dont-remove').remove() Your if tests whether the jQuery object is truthy, and a jQuery object is always truthy — even when it matches no elements — so the removal inside runs every time, on every previous span. Chaining .not('.dont-remove') before .remove() filters the protected spans out of the set first, and .remove() simply does nothing when that set is empty. http://jsfiddle.net/sAebR/2/
1
0.790368
0.262579
import { assert } from "@thi.ng/api"; import { isString } from "@thi.ng/checks"; import { illegalArgs } from "@thi.ng/errors"; import type { Lit, Sym, Term } from "../api/nodes"; import type { SymOpts } from "../api/syms"; import type { ArrayTypeMap, Type } from "../api/types"; import { gensym } from "./idgen"; export function sym<T extends Type>(init: Term<T>): Sym<T>; export function sym<T extends Type>(type: T): Sym<T>; export function sym<T extends Type>(type: T, opts: SymOpts): Sym<T>; export function sym<T extends Type>(type: T, init: Term<T>): Sym<T>; export function sym<T extends Type>(type: T, id: string): Sym<T>; // prettier-ignore export function sym<T extends Type>(type: T, id: string, opts: SymOpts): Sym<T>; // prettier-ignore export function sym<T extends Type>(type: T, opts: SymOpts, init: Term<T>): Sym<T>; // prettier-ignore export function sym<T extends Type>(type: T, id: string, opts: SymOpts, init: Term<T>): Sym<T>; export function sym<T extends Type>(type: any, ...xs: any[]): Sym<any> { let id: string; let opts: SymOpts; let init: Term<T>; switch (xs.length) { case 0: if (!isString(type)) { init = type; type = init.type; } break; case 1: if (isString(xs[0])) { id = xs[0]; } else if (xs[0].tag) { init = xs[0]; } else { opts = xs[0]; } break; case 2: if (isString(xs[0])) { [id, opts] = xs; } else { [opts, init] = xs; } break; case 3: [id, opts, init] = xs; break; default: illegalArgs(); } return { tag: "sym", type, id: id! || gensym(), opts: opts! || {}, init: init!, }; } export const constSym = <T extends Type>( type: T, id?: string, opts?: SymOpts, init?: Term<T> ) => sym(type, id || gensym(), { const: true, ...opts }, init!); /** * Defines a new symbol with optional initial array values. * * Important: Array initializers are UNSUPPORTED in GLSL ES v1 (WebGL), * any code using such initializers will only work under WebGL2 or other * targets. 
*/ export const arraySym = <T extends keyof ArrayTypeMap>( type: T, id?: string, opts: SymOpts = {}, init?: (Lit<T> | Sym<T>)[] ): Sym<ArrayTypeMap[T]> => { if (init && opts.num == null) { opts.num = init.length; } assert(opts.num != null, "missing array length"); init && assert( opts.num === init.length, `expected ${opts.num} items in array, but got ${init.length}` ); const atype = <Type>(type + "[]"); return <any>{ tag: "sym", type: atype, id: id || gensym(), opts, init: init ? { tag: "array_init", type: atype, init, } : undefined, }; }; export const input = <T extends Type>(type: T, id: string, opts?: SymOpts) => sym(type, id, { q: "in", type: "in", ...opts }); export const output = <T extends Type>(type: T, id: string, opts?: SymOpts) => sym(type, id, { q: "out", type: "out", ...opts }); export const uniform = <T extends Type>(type: T, id: string, opts?: SymOpts) => sym(type, id, { q: "in", type: "uni", ...opts });
1
0.914133
0.943106
Date: {{.Date}} From: MAILER-DAEMON@{{.Me}} To: {{.RcptTo}} Subject: failure notice Hi. This is the tmail deliverd program at {{.Me}} I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <{{.RcptTo}}>: {{.ErrMsg}} --- Below this line is a copy of the message. {{.BouncedMail}}
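The {{.Field}} placeholders above are Go text/template actions (tmail is written in Go). A minimal sketch of how a delivery daemon might fill them in — the bounceCtx struct, its field values and the helper below are illustrative assumptions based only on the placeholder names, not tmail's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// bounceCtx carries one field per {{.X}} placeholder in the template.
type bounceCtx struct {
	Date, Me, RcptTo, ErrMsg, BouncedMail string
}

const bounceTpl = `Date: {{.Date}}
From: MAILER-DAEMON@{{.Me}}
To: {{.RcptTo}}
Subject: failure notice

Hi. This is the tmail deliverd program at {{.Me}}
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

<{{.RcptTo}}>: {{.ErrMsg}}

--- Below this line is a copy of the message.

{{.BouncedMail}}
`

// renderBounce fills the template with one recipient's failure details.
func renderBounce(ctx bounceCtx) (string, error) {
	t, err := template.New("bounce").Parse(bounceTpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, ctx); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	msg, err := renderBounce(bounceCtx{
		Date:        "Mon, 02 Jan 2006 15:04:05 -0700",
		Me:          "mx.example.com",
		RcptTo:      "user@example.org",
		ErrMsg:      "550 5.1.1 no such user",
		BouncedMail: "(original message would go here)",
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(msg)
}
```

Rendering into a buffer first, rather than streaming to the SMTP connection, lets the daemon abort cleanly if a placeholder fails to resolve.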
1
0.036075
0.013886
AURORA — More than 20 years after the first segment of light rail opened in metro Denver, the Regional Transportation District is poised to launch its latest train line through the heart of Aurora with a critical new connection for transit riders headed to the airport. “It is a big day for Aurora,” said RTD general manager Dave Genova, standing aboard the new R-Line light-rail train as it rolled alongside Interstate 225 on Friday morning. “I think it’s going to open up a lot more commuting opportunities for people in the southeast portion of the metro area.” Genova joined reporters on a media ride of the new train, which is part of metro Denver’s still-growing 118-mile FasTracks transit network. The 22-mile R-Line, which runs from Lincoln Station in Lone Tree to Peoria Station in Aurora, opens to the public Feb. 24. But the real work was the construction of 10.5 miles of new track — totaling $687 million — that connects Nine Mile Station with Peoria, a stubborn gap in Denver’s transit system that kept the state’s third-largest city from fully linking to its neighbors. It also allows Aurora commuters to ride the new line to Peoria and easily transfer to the University of Colorado A-Line, which provides access to downtown Denver and to Denver International Airport. According to the RTD schedule for the R-Line, a trip from end to end should take just under an hour. “Connecting a city center with light rail opens up economic development and transit-oriented development that without that connection you don’t have,” Genova said. Aurora Mayor Steve Hogan said his city of 350,000 has been preparing for this day a long time, with various projects already sprouting up around the eight new light-rail stations that will go live in a week. That includes the burgeoning Anschutz medical complex at the north end of the line, the new Veterans Administration Hospital, a hotel at the 2nd Avenue stop, and new developments around Aurora Metro Center Station. 
“There are good things happening at several stations already and we would expect more,” the mayor said. “We look at northeast Aurora as the next 50 years of our city. Developers can open a map and say, ‘Aha!’ It’s just another tool that developers have to help us as a jurisdiction get quality development that brings us housing, retail and jobs.” The R-Line, which serves 16 stations and is expected to have daily ridership of 12,000 one year after it opens, jumped from concept to reality nearly five years ago, when RTD picked Kiewit Infrastructure Co. to complete the unbuilt portion of the line through Aurora. At the time, Kiewit said it expected to finish the line in November 2015 at a cost of $350 million. RTD spokesman Scott Reed said Kiewit’s costs were in line with its estimates, but with right-of-way acquisitions, environmental studies, the purchase of 19 light-rail vehicles and insurance, the total project cost was closer to $700 million. The project’s timeline was thrown off a bit in 2013 when CU president Bruce Benson asked RTD to move the new line’s alignment off East Montview Boulevard to protect sensitive equipment in medical and research facilities on the 578-acre Fitzsimons campus from vibrations and electromagnetic interference generated by the trains as they pass University Hospital, Children’s Hospital Colorado and the VA Hospital. Joe Amon, The Denver Post: The R-Line train, RTD's latest rail line to open, pulls into Peoria Station Feb. 17, 2017 in Aurora. RTD turned 50 on Monday. 
Genova said another big design challenge with the R-Line was running it through Aurora’s city center, complete with sharp curves, slower speeds and at-grade crossings. “I think we have every kind of safety device you can put on a crossing,” said Genova, who served as RTD’s safety chief before taking charge of the agency last year. But that jog through the heart of Aurora is a critical component of the new light-rail line, according to RTD director Bob Broom, who once served as a councilman for the city. He represents the area encompassing the northern half of the R-Line. “I think it’s going to rejuvenate that area that has laid fallow in the middle of the city for years,” Broom said. “I call this new line Aurora’s new Main Street.” For metro-area commuters who are accustomed to traveling by light rail, the R-Line will sound and feel familiar. It will provide more than 1,200 new parking spaces on its freshly constructed section through Aurora. But three of the new stations — Colfax, Fitzsimons and Florida — do not have parking. The Fitzsimons Station will provide free shuttle buses for workers to access the sprawling medical campus. RTD parking regulations — first 24 hours free for those who live in the RTD district — apply at all the new stations except for Iliff, which will charge $3 per day at the new city-owned, 600-space garage. 
For those using the R-Line and A-Line to access DIA, they can purchase a regional day pass for $9 at any of the stops on the R-Line and transfer to the A-Line at Peoria for free. RTD will also expand the H-Line two stops to the Florida Station starting Feb. 24 to give more Aurora residents another option to get to downtown Denver. The R-Line is the first light-rail line to open since the W-Line went operational between Golden and Denver Union Station in 2013. Two new commuter rail lines — the A-Line and the B-Line, connecting Denver to Westminster — opened in 2016. With the R-Line’s official opening next week, FasTracks will have about 75 miles of rail in operation. The N-Line, a commuter rail line serving residents of Commerce City, Northglenn and Thornton, is scheduled to open in 2018, while an extension of the southeast line deeper into Lone Tree will open the following year. The opening of the G-Line serving Wheat Ridge and Arvada is on an indefinite delay as RTD officials and its contractor, Denver Transit Partners, try to fix a software glitch with crossing-arm technology that impacts both the G-Line and A-Line. The R-Line experienced its own delay late last year when RTD said it wouldn’t meet its December opening deadline. But Hogan, Aurora’s mayor, said he would rather have a slight hiccup in the introduction of new rail service than have a line that opens before all issues are worked out. “I’d rather have a line that opens two months late that works than one that opens on time but still has problems,” he said.
1
0.733109
0.280016
Friday, June 6, 2014 From Memphis TN-USA There's not a day, hour or second that goes by when I don't think of you. The devastation of that day is something I'll never outlive. Michael, I have so much love and respect for you and your contribution to this world. Your unselfish acts of love and humanitarianism will never be forgotten. I love your heart first of all, because it's quite special. You are my inspiration and motivation. Your spirit of love has taught me to love better. Your acts of humanitarianism have taught me how to give more. "I miss you" are words that I never thought I would have to say about you. When the days become darker, your voice enlightens my spirit and my heart. I will continue to honor your legacy each day I live. I will continue to be a giving person, as you've taught me throughout your works. I will continue to be a faithful fan who diligently teaches others about you. Michael, you are greatly loved and missed. I LOVE YOU! I hope that somehow and someway you know this. I love you always.
1
0.976062
0.009632
The next recession will be wrenching for some workers. It will intensify the adoption of new technology that makes some jobs obsolete. It will force many workers to become digitally proficient, or leave the labor force for good. It will also present opportunities for those ready to grab them. “When employers cut payrolls, they have a chance to reoptimize and rethink their whole workforce strategy,” says Andrew Chamberlain, chief economist at job site Glassdoor. “You’re likely to see big changes.” Recessions might seem like tidal-wave events that swamp everybody in their path. But there are a surprising number of things workers at all levels can do to raise the odds of staying employed and even getting ahead during a recession. And the time to do that isn’t when the recession hits—it’s before, when money and other resources are more freely available. If there’s any good news about the next downturn, it’s that it probably won’t be as severe as the last one, which ran from December 2007 through May 2009, featured twin housing and financial crashes, and sent the unemployment rate soaring to 10%. “The next recession will be a garden-variety downturn,” Ryan Sweet, director of real-time economics for Moody’s Analytics, told Yahoo Finance recently. “I don’t think the next recession is going to be a financial crisis.” That doesn’t mean it will be pleasant, however. Even now, with the unemployment rate at 3.7%, many workers struggle with lagging pay and outdated skills. In a recession unemployment could rise rapidly to around 7.5%, the historical norm for a downturn. That would push the number of unemployed from 6.1 million now to more than 12 million, with others getting their hours cut or giving up on work and dropping out of the labor force. What should you be doing to prepare? Here are seven pitfalls labor-market experts foresee, with tips for how to avoid them: A few industries could be devastated. 
Retail seems particularly vulnerable, since many chains have too many outlets and too much debt to compete effectively with behemoth Amazon and other online merchants. Other industries where jobs are endangered include transportation, logistics, warehousing, food service and hospitality, as many jobs can be transitioned to robots and other types of automation. Even finance jobs could disappear if they involve predictable trading that could be done by software. Safer industries include health care, technology and government. And construction, which was battered during the housing bust, might hold up, since there’s a shortage of residential housing in some areas. What to do now: If you work in a vulnerable industry, look for pathways into a safer one. “If you’re a retail worker, there are a number of paths you can take out of retail, and moving out of retail lowers your automation risk,” says Matt Sigelman, CEO of labor-market research firm Burning Glass Technologies. One example: If you work at a retailer that sells electronics, learn how to do installation and repair work, like Best Buy’s Geek Squad does. Then apply those skills to landing a help-desk or tech-support job in a different industry. “Those are golden gateway jobs that open up part of the technology spectrum,” Sigelman says. “And they don’t even require a college degree.” Display of Hewlett-Packard laptop computers in a Best Buy store in Pittsburgh. (AP Photo/Gene J. Puskar) Employers will demand more. This is now an established trend: When recessions hit and the labor-supply grows, many employers raise the requirements for a given job. So if a training certificate once got you in the door, it might now take an associate’s or bachelor’s degree. And if it used to require a bachelor’s degree, it might now require a master’s. 
This is one reason many companies are complaining now about a shortage of skilled workers: After the last recession, they grew accustomed to a large pool of overqualified applicants and abandoned their own training programs. What to do now: Get as much training as you can. That doesn’t mean taking out $50,000 in loans just to add another degree to your resume. Instead, cherry-pick cost-effective training programs you know employers value, especially if they might help you get promoted or qualify for a better job now. “We’re lucky to live in an era when education is moving in the Spotify direction,” says Sigelman. “You don’t need a whole degree. You can go to Coursera or EdEx and buy a course.” If you can do it on your company’s dime, even better. It’s also a good time to ask for more responsibility and any additional training your company can provide. Everybody will become a hybrid worker. Tech companies increasingly hire non-tech experts in marketing, sales, management and business support. And non-tech firms now need software engineers, data analysts, database programmers and all manner of technology specialists. When the next recession hits and employers must decide who stays and who goes, the workers with crossover skills in various fields will be the survivors. What to do now: Get out of your bubble, and more than anything, get technology training. “Digital skills are going to be part of 80% to 90% of jobs,” says Jane Oates, president of the nonprofit group Working Nation. “This is a time to do any self-improvement you can possibly do.” Robots could finally make their move. For all the talk of robots, they haven’t displaced many jobs yet. But that could change as employers jump on the chance to experiment with new systems based on virtual reality and artificial intelligence that are currently being developed in labs. 
“We can expect to see workers in faraway countries operating robots remotely in the service economy,” says Louis Hyman, director of the Institute for Workplace Studies at Cornell University. “They could do all kinds of stuff remotely. Pick crops. Serve drinks. Fold towels. Mop floors.” The robot revolution might be overhyped, and robots might start by doing the least desirable jobs when workers are hard to find. But the technology is advancing rapidly, putting more and more jobs at risk. What to do now: If your job involves repetitive tasks that don’t vary much, it’s a candidate for automation—and that includes white-collar work as well as blue-collar. Develop new skills that allow you to be more productive, especially if it means working with robots and advanced machines. Manufacturing workers who can operate CNC machines or CAD/CAM tools are in much higher demand than those who can simply assemble things. White-collar workers who can work with databases, develop strategy, and close deals are more valuable than those who simply compile reports month after month. It might also pay to explore remote work and become a robot operator. Flexibility will be crucial. One problem in the economy today is low labor mobility: some workers in depressed areas can’t or won’t move to where there are more jobs, consigning themselves to ever-falling living standards. Employers, meanwhile, are setting up shop in places where they can get the skilled workers they need, while abandoning economic backwaters. The economy will probably become even more bifurcated during the next recession, as employers consolidate in coastal cities, university towns, tech hubs and other areas that can supply needed workers. What to do now: Stay nimble. “Save money,” says Jane Oates. “Put off large purchases. Avoid carrying debt, if possible. 
The way to make yourself recession-proof is to get yourself as many options as possible.” Buying a home, for instance, is still a good way to build wealth—as long as it’s in a market with a healthy economy, and you’re relatively sure you’ll be in the house for at least five years. But committing to a mortgage can make it impossible to move if home values fall and you can’t afford to sell at a loss. The government will do less to help. After the 2008 financial crash, Washington provided trillions of dollars in monetary and fiscal stimulus, which probably prevented a recession from becoming a full-blown depression. The government won’t be as generous next time around. The Federal Reserve, which typically slashes interest rates by about 5 percentage points during a recession, to stimulate lending, has begun cutting rates from a ceiling of just 2.5%. And with the national debt soaring by nearly $1 trillion per year, Washington may not have the wallet to cut taxes (again), fund stimulus projects like road and bridge construction, enhance unemployment benefits and do other things typical in a recession. What to do now: Become self-sufficient and develop backup plans. Families may need to rely more on each other if the safety net frays. If you have health insurance through an employer, get needed checkups or other procedures now, since you may no longer have insurance if a recession hits and you lose your job. Another jobless recovery will follow the recession. Since 1990, we’ve had three recessions, and each has been followed by a “jobless recovery,” with employers very slow to staff back up and some jobs disappearing for good. “There’s no reason to think it will be different the next time,” says Hyman of Cornell. That’s because the economy has downshifted into a trend of slower growth that doesn’t require companies to hire rapidly after a recession. Instead, they can hire selectively and assess new technologies that might augment or replace workers. 
What to do now: Take the long view, and prepare for future jobs that might be considerably different than the one you have now. “Human capital depreciates around 1%-2% per year,” says Sigelman of Burning Glass. “So you should be investing 1%-2% of your time to replace what is depreciating. You should always be learning a new skill while staying on top of whatever field you’re in.” If that sounds like a lot of work, consider it the price of surviving the next recession.
Q: Starshapeness of polynomial tracts with respect to the (entire collection of) critical points contained in the tract I recently found out (Piranian, "The Shape of Level Curves") that a polynomial tract (i.e., a connected component of a set of the form $G=\{z:|p(z)|<\epsilon\}$ for some $\epsilon>0$) need not be starshaped with respect to the zeros of $p$ contained in $G$. This disappointed me bitterly, as that starshapeness was a pivotal step in a proposed "proof" I had of Smale's mean value conjecture. The places where $G$ is not starshaped with respect to the zeros of $p$ in $G$ are near critical points of $p$ in $G$ or in $\partial G$, so I still hold out a tiny bit of hope for the starshapeness of $G$ with respect to the critical points of $p$ contained in $G$: Conjecture: If $G$ is a tract of $p$ with smooth boundary containing more than one distinct zero of $p$, then $G$ is starshaped with respect to the critical points of $p$ contained in $G$. Intuitions/proofs/disproofs/references are all welcome. EDIT: Note that when I say that $G$ should be "starshaped with respect to the critical points", I mean that each point in $G$ can be seen by some one of the critical points of $p$ in $G$, not of course that some single critical point can see all points in $G$. Note also that I added the assumption that $G$ contains more than one distinct zero of $p$ (since otherwise $G$ will not contain any critical points of $p$). One reason I think this is plausible: If we consider the lemniscate of Bernoulli, and let $G$ be the interior of a level curve of $p$ which is a bit bigger, the critical point of $p$ is right in the center, so should be able to "see" both lobes.
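The Bernoulli intuition in the last paragraph is easy to check numerically. Below is a short Python sketch (my own sanity check, not from any reference) for $p(z)=z^2-1$, whose level set $|p|=1$ is the lemniscate of Bernoulli: it samples the boundary of the slightly larger tract $\{z:|p(z)|<c\}$ with $c>1$ and verifies that the segment joining each sampled boundary point to the critical point $z=0$ stays inside the closed tract.

```python
import numpy as np

def p(z):
    return z * z - 1

c = 1.5  # level slightly above 1, so the two lobes have merged into one tract

# Boundary points of {|z^2 - 1| = c}: solve z^2 = 1 + c*e^{i*theta},
# taking both square roots to cover the whole boundary curve.
theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
w = 1 + c * np.exp(1j * theta)
boundary = np.concatenate([np.sqrt(w), -np.sqrt(w)])

# For each boundary point z, test whether the segment from the critical
# point 0 to z stays inside the closed region |p| <= c.
t = np.linspace(0, 1, 500)[:, None]   # parameter along each segment
seg = t * boundary[None, :]           # shape (500, 4000)
visible = np.all(np.abs(p(seg)) <= c + 1e-9, axis=0)

print(np.all(visible))  # True: every sampled boundary point sees z = 0
```

In fact, for this particular $p$ the check can be promoted to a proof: $|p(tz)| = |t^2 z^2 - 1|$ is a convex function of $t^2 \in [0,1]$, so it is bounded by its endpoint values, i.e. by $\max(1, |p(z)|) \le c$.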
In the counter-example of Piranian to my desired conjecture (that tracts are star-shaped with respect to the zeros they contain), the points that killed the starshapeness were close to the boundary of $G$, so perhaps if we assume $\partial G$ is smooth, $G$ will contain enough critical points to see into all "corners". A: There is no hope: according to a theorem of Hilbert, every analytic Jordan curve $J$ can be approximated by a lemniscate $\{z:|P(z)|=\epsilon\}$. So the set does not have to be starlike with respect to any point. For this theorem of Hilbert, see, for example J. L. Walsh, ``Interpolation and Approximation by Rational Functions in the Complex Plane,'' 5th ed., Amer. Math. Society, Providence, RI, 1969.
import PropTypes from 'prop-types';
import { partition, hierarchy } from 'd3-hierarchy';
import { flattenHierarchy } from '@potion/util';
import Layout from './Layout';

// Wraps d3-hierarchy's partition layout. The class was named `Pack` in the
// original source, apparently a copy-paste slip from the Pack layout file.
export default class Partition extends Layout {
  static displayName = 'Partition';

  static propTypes = {
    separation: PropTypes.number,
    size: PropTypes.arrayOf(PropTypes.number),
    round: PropTypes.bool, // d3's partition.round expects a boolean, not a number
    data: PropTypes.object.isRequired,
    includeRoot: PropTypes.bool,
    sum: PropTypes.func,
  };

  static defaultProps = {
    ...Layout.defaultProps,
    includeRoot: true,
    sum: d => d.value,
  };

  getSchema() {
    return {
      layout: partition,
      layoutProps: ['round', 'size', 'separation'],
      selectStylesToTween: d => ({
        x0: d.x0,
        y0: d.y0,
        x1: d.x1,
        y1: d.y1,
      }),
    };
  }

  getData() {
    const { data, sum, includeRoot } = this.props;
    return flattenHierarchy(
      this.getLayout()(
        hierarchy(data).sum(sum)
      )
    ).slice(includeRoot ? 0 : 1);
  }
}
function [im, scale] = readImage(imagePath)
% READIMAGE Read and standardize image
%   [IM, SCALE] = READIMAGE(IMAGEPATH) reads the specified image file,
%   converts the result to SINGLE class, and rescales the image
%   to have a maximum height of 480 pixels, returning the corresponding
%   scaling factor SCALE.
%
%   READIMAGE(IM) where IM is already an image applies only the
%   standardization to it.

% Author: Andrea Vedaldi

% Copyright (C) 2013 Andrea Vedaldi
% All rights reserved.
%
% This file is part of the VLFeat library and is made available under
% the terms of the BSD license (see the COPYING file).

if ischar(imagePath)
  try
    im = imread(imagePath) ;
  catch
    error('Corrupted image %s', imagePath) ;
  end
else
  im = imagePath ;
end

im = im2single(im) ;
scale = 1 ;
if (size(im,1) > 480)
  scale = 480 / size(im,1) ;
  im = imresize(im, scale) ;
  im = min(max(im,0),1) ;
end
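For readers who want the same standardization outside MATLAB, the following is a rough NumPy-only Python sketch (an illustration of the logic above, not part of VLFeat): file decoding is omitted, and MATLAB's imresize (bicubic by default) is replaced by nearest-neighbor index selection to keep the example dependency-free.

```python
import numpy as np

def read_image(im, max_height=480):
    """Standardize an image array: convert to float32 in [0, 1] and,
    if the image is taller than `max_height` pixels, downscale it
    (nearest-neighbor here, for simplicity). Returns (image, scale)."""
    im = np.asarray(im)
    if im.dtype == np.uint8:
        im = im.astype(np.float32) / 255.0  # mimic im2single on uint8 input
    else:
        im = im.astype(np.float32)

    scale = 1.0
    h, w = im.shape[:2]
    if h > max_height:
        scale = max_height / h
        # Nearest-neighbor sampling grid. MATLAB's bicubic imresize can
        # overshoot [0, 1], which is why the original clamps the result.
        rows = np.clip((np.arange(max_height) / scale).astype(int), 0, h - 1)
        cols = np.clip((np.arange(int(round(w * scale))) / scale).astype(int), 0, w - 1)
        im = im[rows][:, cols]
        im = np.clip(im, 0.0, 1.0)
    return im, scale
```

For example, a uint8 array of shape (960, 100) comes back as a (480, 50) float32 array together with scale 0.5.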
Glutathione localization by a novel o-phthalaldehyde histofluorescence method. Glutathione in tissues forms an intense fluorophore with a solution of o-phthalaldehyde at room temperature. We have studied the loss of glutathione from tissue sections and find that it is not measurable from thick sections. The fluorescence spectra of the induced fluorophore between glutathione and o-phthalaldehyde are identical in model and tissue sections, while depletion of hepatic glutathione by diethyl maleate produces a comparable fall in fluorescence measured biochemically or histochemically. This simple method is specific as interfering substances, such as spermine and spermidine, produce very weak fluorescence under the conditions employed.
Sudo for Windows (sudowin) allows authorized users to launch processes with elevated privileges using their own passphrase. Unlike the runas command, Sudo for Windows preserves the user's profile and ownership of created objects.

SubtitleCreator allows you to create new subtitles for your DVDs. It has a wizard interface, advanced synchronization features, DVD preview, and a simple WYSIWYG editor. The latest version even has support for karaoke.

SimMetrics is a similarity-metric library, covering edit distances (Levenshtein, Gotoh, Jaro, etc.) as well as other metrics (e.g., Soundex, Chapman). Work provided by the University of Sheffield, UK, funded by AKT, an IRC sponsored by EPSRC, grant number GR/N15764/01.

This is an analog of the NCover application, but it has some advantages. The project is not actively developed at the moment, so you may try the following projects instead: https://github.com/sawilde/partcover.net4 - the original fork; https://github.com/sawilde/opencover - another coverage tool, from the blessed man who was able to keep PartCover alive. Regarding the license: all sources (here at SF) are open. You are free to copy/modify/distribute without any confirmation from my side. I cannot guarantee the same for files in other locations. Best regards!

FileHelpers is an easy-to-use .NET library written in C#. It is designed to read/write data from flat files with fixed-length or delimited records (CSV). It also has support to import/export data from different data storages (Excel, Access, SQL Server). Code on GitHub: https://github.com/MarcosMeli/FileHelpers Bugs/Ideas: http://filehelpers.myjetbrains.com/youtrack/rest/agile/FH/sprint
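SimMetrics itself is a Java library; purely as a minimal illustration of the simplest metric it implements, here is a dynamic-programming Levenshtein edit distance sketched in Python (not SimMetrics code):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between a[:i-1] and b[:j] (previous row)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[len(b)]

print(levenshtein("kitten", "sitting"))  # 3
```

Metrics like Gotoh's add affine gap costs, and Jaro weights transpositions, but all follow this same compare-two-strings, return-a-score shape.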
890 F.2d 388 51 Fair Empl.Prac.Cas. 962,53 Fair Empl.Prac.Cas. 304,52 Empl. Prac. Dec. P 39,504,52 Empl. Prac. Dec. P 39,728Guydell HORLOCK, Plaintiff-Appellee,v.GEORGIA DEPARTMENT OF HUMAN RESOURCES, et al., Defendants-Appellants. No. 88-8611. United States Court of Appeals,Eleventh Circuit. Dec. 13, 1989.Order on Grant of Rehearing En Banc Jan. 31, 1990. Annette M. Cowart, William F. Amideo, Atlanta, Ga., for defendants-appellants. A. Lee Parks, Jr., Theresa L. Kitay, Meals, Kirwan, Goger, Winter & Parks, Atlanta, Ga., for plaintiff-appellee. Appeal from the United States District Court for the Northern District of Georgia. Before COX, Circuit Judge, HILL*, and SNEED**, Senior Circuit Judges. HILL, Senior Circuit Judge: I. INTRODUCTION 1 Richard A. Fields and Sandra Watson, the individual defendants/appellants involved in the above-styled case, appeal the district court's denial of their motion for summary judgement on the basis of qualified immunity. As such, we review the district court's decision on the basis of the facts viewed in the light most favorable to the party against whom summary judgment was sought, the plaintiff-appellee in this case. 2 Since we find that no issue of material fact exists that would preclude appellants' entitlement to qualified immunity on the claim for a deprivation of property without due process, we conclude that appellants should have been granted summary judgement on this claim. A. Facts 3 Plaintiff/appellee, Guydell Horlock, is a fifty-four year old white female. Ms. Horlock has been employed by the Georgia Department of Human Resources at Georgia Regional Hospital--Atlanta ("GRHA") since 1971. Defendant/appellant Richard A. Fields, M.D., a thirty-seven year old black male, is the superintendent of GRHA. Ms. Horlock worked directly under Dr. Fields as an administrative secretary beginning in November, 1982. 
4 In December, 1985, Fields hired defendant/appellant, Sandra Watson, a thirty-year old black female, on an emergency appointment at GRHA as an "Activity Therapist." Ms. Watson instead acted as a consultant to Dr. Fields in the contemplated managerial reorganization of the superintendent's office. 5 Ms. Horlock alleges that in February 1986, at about the time that Watson's ninety-day emergency appointment was to expire, Dr. Fields informed Horlock that she would "no longer be needed" in his office. Horlock was transferred to an administrative secretary's position in the Planning and Development Section of GRHA. Ms. Horlock makes no claim that her new position, which she currently holds, carried with it a loss of salary or demotion of any kind. 6 In February, 1986, at the time that Horlock was being transferred, Dr. Fields proposed to create and fill a purportedly new position in his office for an administrative "assistant." Ms. Horlock alleges and regards as crucial to her case the fact that the administrative "assistant" position is fundamentally the same as that of her former administrative "secretary" position, with some of the more clerical tasks replaced by a duty to represent the superintendent at specified functions. Horlock maintains that Dr. Fields created this "new in name only" position for Ms. Watson because Watson is a young black female. Horlock contends that she was removed from her former position to make way for Watson. 7 Due to evidence of non-compliance by Dr. Fields with Georgia State Merit System procedures for staffing a new position, his first attempt to fill the administrative assistant position with his alleged pre-selected candidate, Watson, was nullified; the position was thereafter posted for interested applicants. On his second attempt to fill the post, Fields added the requirement of a master's degree and/or six years of hospital administration experience as prerequisites to applying for the job. 
Watson and Horlock were the only candidates who applied. 8 Watson outscored Horlock on a written test and an oral interview before a committee of three hospital employees. These tests were designed by Dr. Fields; Horlock disputes the validity of the procedure. Horlock alleges that she received a rejection letter from Dr. Fields on the same day she was interviewed, and that evidence demonstrates that Fields rejected Horlock even before he consulted with the screening committee. Watson received the permanent appointment as administrative assistant in June, 1986. 9 On July 8, 1986, Ms. Horlock initiated an internal complaint of discrimination regarding the selection process. On July 15, 1986, that complaint ripened into a charge filed with the Georgia Office of Fair Employment Practices. 10 On July 10, 1986, Ms. Horlock's employment supervision was transferred by Dr. Fields from himself to Ms. Watson. Watson later issued Horlock a Report of Performance with the lowest score Horlock had received in her fifteen years as a state employee. Watson cited Horlock's attitude problems, including her hostile and resistant demeanor. 11 Horlock then filed her second administrative charge, alleging retaliation. In early August, 1986, the Georgia Office of Fair Employment Practices determined that there was just cause for finding retaliation by Fields and Watson. B. Procedural History 12 On April 14, 1987, Horlock filed this action in federal court in the Northern District of Georgia and asserted causes of action under Title VII; the Age Discrimination in Employment Act ("ADEA"); 42 U.S.C. Secs. 1981 and 1983, and the Fourteenth Amendment. Horlock alleged unlawful discrimination in employment on the basis of race and age, claimed that the defendants retaliated against her after she filed charges of discrimination, and asserted that defendants denied her due process by depriving her of a protected property interest in a wholly arbitrary and capricious manner. 
She requested various forms of equitable and legal relief, including awards of compensatory and punitive damages. 13 In November, 1987, all defendants moved for summary judgment, arguing that Horlock failed to state a claim upon which relief could be granted as to each of the asserted causes of action; and that, even if plaintiff had alleged sufficient facts to state any of the claims, defendants Fields and Watson were entitled to qualified immunity from damages because defendants' actions allegedly did not violate any "clearly established" right of plaintiff, as required in order to avoid the qualified immunity of public officials to such suits. 14 On February 29, 1988, a magistrate issued his report and recommendation that defendants' motion be denied in all respects. Defendants objected to the magistrate's report in only two respects. First, they asserted that the magistrate erred in denying them summary judgment on Horlock's due process claim under section 1983 because plaintiff failed to show the existence of a property interest in the position of administrative assistant. The defendants also challenged the magistrate's conclusion that Fields and Watson are not entitled to qualified immunity. 15 The district court considered the defendants' objections and on July 1, 1988, denied defendants' motion for summary judgment and adopted the magistrate's report and recommendation on both issues. Concerning defendants' first objection, the court agreed that Horlock needed to have a property right in the administrative assistant position in order to allege an unconstitutional deprivation of due process. The court declared that summary judgment was inappropriate, however, because a material factual issue existed regarding whether the administrative "assistant" position was a new position or differed only in name from Horlock's former administrative "secretary" position. 
16 The district court's order intimated that Horlock had a property interest "in a job she had held for over six years," and stated that "deprivation of a property interest for an improper motive and by pretextual means is a substantive due process violation." Regarding the qualified immunity issue, the court cited the magistrate's report with approval: 17 Plaintiff alleges defendants denied her right to be free from discrimination in employment on the basis of race and age, the right to protest unlawful employment practices of the defendants, and the right not to be deprived of a protected property interest without due process of law. These are fundamental rights which have been established through the Title VII, ADEA, and the Civil Rights statutes which defendants should have been aware of. None of these statutes are of recent enactment. 18 The order, insofar as it denied summary judgment on the issue of whether Horlock was deprived of a property interest as required to state a section 1983 due process claim, was not a final judgment of the district court directly appealable to this court. The district court amended its order so that defendants could apply for leave to appeal from the interlocutory order. On November 30, 1988, this court denied defendants' petition for discretionary appeal under 28 U.S.C. Sec. 1292. The individual defendants now appeal from the denial of summary judgement on the ground of qualified immunity. II. DISCUSSION 19 We begin by reviewing the procedural and jurisdictional context of this case. Appellants contend that the district court erred in refusing to grant their motion for summary judgment based upon qualified immunity of Fields and Watson to the section 1983 due process claim. At issue is a small slice of the case at large. Only the individual defendants, Fields and Watson, are involved: qualified immunity does not apply to the Georgia Department of Human Resources and its agency, the Georgia Regional Hospital-Atlanta. 
Whether appellee stated causes of action under the ADEA, Title VII, section 1981, and section 1983 is not at issue. Only the qualified immunity issue as it relates to appellee's due process claim brought under section 1983 is before us. 20 The district court's denial of summary judgment to Dr. Fields and Ms. Watson on the basis of their qualified immunity from the Sec. 1983 due process claim is a final order of the district court and thus appealable as of right to this court under 28 U.S.C. Sec. 1291. 21 A. Appellate Jurisdiction Over Qualified Immunity Claims. 22 The qualified immunity determination " '... falls within that "small class [of decisions] which finally determine claims of right separable from, and collateral to, rights asserted in the action, too important to be denied review and too independent of the cause itself to require that appellate consideration be deferred until the whole case is adjudicated." ' " Rich v. Dollar, 841 F.2d 1558, 1560 (11th Cir.1988), quoting Mitchell v. Forsyth, 472 U.S. 511, 524-25, 105 S.Ct. 2806, 2814-15, 86 L.Ed.2d 411 (1985), in turn quoting Cohen v. Beneficial Indus. Loan Corp., 337 U.S. 541, 546, 69 S.Ct. 1221, 1225-26, 93 L.Ed. 1528 (1949). 23 The denial of defendants' claim for qualified immunity in this case turned on an issue of law since "[t]he district court's determination that a genuine issue of material fact precluded it from granting summary judgment for appellant based on his claims of immunity is itself a question of law." So long as substantial factual development has occurred,1 factual disputes do not affect qualified immunity analysis since "that analysis assumes the validity of the plaintiff's version of the facts and then examines whether those facts 'support a claim of violation of clearly established law.' " Goddard v. Urrea, 847 F.2d 765, 769 (11th Cir.1988) (Johnson, dissenting). See Mitchell, 472 U.S. at 527-28, 105 S.Ct. at 2816 ("issue is a purely legal one: whether the facts alleged ... 
by the plaintiff ... support a claim of violation of clearly established law"); see also, Rich, 841 F.2d at 530 (same). This principle rests on the rationale that the asserted immunity is an immunity not merely from ultimate liability--it protects public officials from having to stand trial at all. Id. 24 The district court in this case determined there to be a genuine issue of fact whether the position allegedly created for Ms. Watson was the same as the position previously held by Ms. Horlock. If not, the court implied, then Ms. Horlock would not have a property interest in the new position and Dr. Fields' actions, while possibly a violation of Title VII and/or the ADEA, would not be a violation of a clearly established right to be free from the arbitrary and capricious deprivation of a property right. 25 If plaintiff could prove at trial that the new administrative "assistant" position is the same as the administrative "secretary" position--the district court implicitly held and appellee argues in this court--the defendants' actions (as alleged by plaintiff) would constitute a violation of clearly established rights; thus the defendants would not be entitled to immunity. 26 We are therefore faced with a purely legal determination of whether the district court's analysis is correct.2 Accordingly, we turn our attention to the merits of the immunity claim. 27 B. Qualified Immunity Analysis. 28 Under the Harlow v. Fitzgerald, 457 U.S. 800, 818, 102 S.Ct. 2727, 2738, 73 L.Ed.2d 396 (1982) qualified immunity test, "government officials performing discretionary functions generally are shielded from liability for civil damages insofar as their conduct does not violate clearly established statutory or constitutional rights of which a reasonable person should have known." 29 Zeigler v. Jackson, 716 F.2d 847 (11th Cir.1983) sets forth the two-part allocation of proof to be administered by courts of this circuit when applying the Harlow objective reasonableness test. 
First, the defendant public official must demonstrate that he was acting within the scope of his discretionary authority. Id. at 849. There seems to be no dispute in this case that defendants Fields and Watson satisfy this requirement for gaining qualified immunity. 30 After the defendant public official satisfies this burden, the plaintiff must show that the defendant lacked good faith in taking the discretionary actions. The plaintiff satisfies this burden through proof demonstrating that the defendants "violated clearly established constitutional law." Id. 31 As explained in Rich v. Dollar, 841 F.2d 1558, 1564 (11th Cir.1988), Mitchell v. Forsyth, 472 U.S. 511, 105 S.Ct. 2806, 86 L.Ed.2d 411 (1985) teaches that there are 32 two questions of law that we must decide in completing the second step of the Zeigler analysis: ascertainment of the law that was clearly established at the time of the defendant's action, and a determination as to the existence of a genuine issue of fact as to whether the defendant engaged in conduct violative of the rights established by that clearly-established law.3 33 These determinations go to questions of law that are subject to de novo review by this court. Rich, 841 F.2d at 1563. 34 In establishing whether the facts viewed in the light most favorable to the plaintiff support a finding that Dr. Fields and Ms. Watson violated "clearly established" rights, we must define the appropriate legal norms. In order to assert a valid claim for the deprivation of due process, a plaintiff must show (1) that the defendant acted under color of state law,4 (2) and deprived the plaintiff, (3) of life, liberty,5 or a property interest, (4) in a manner that is "without due process." Parratt v. Taylor, 451 U.S. 527, 536-37, 101 S.Ct. 1908, 1913-14, 68 L.Ed.2d 420 (1981). 35 In determining whether the plaintiff has asserted the deprivation of a "clearly established right" for the purpose of qualified immunity analysis, the Supreme Court in Anderson v. 
Creighton, 483 U.S. 635, 640, 107 S.Ct. 3034, 3039, 97 L.Ed.2d 523 (1987), has stated that "[t]he contours of the right must be sufficiently clear that a reasonable official would understand that what he is doing violates the right." It is insufficient, therefore, for the plaintiff merely to claim that the Fourteenth Amendment (and section 1983) provides appellee with a "clearly established right" to "due process of law" which the defendants should have been aware. The test of "clearly established law" must be applied at a level such that a reasonable public official would understand that the specific action he is taking violates the law. Id., 107 S.Ct. at 3039. 36 We agree with the district court that if, as alleged by the plaintiff-appellee, the defendants' actions were pretextual and motivated by discrimination on the basis of age and race, those actions were taken "without due process." But, as explained supra, this is only one element of a valid due process claim. 37 Plaintiff must demonstrate that she was deprived of a property interest when she was transferred from her administrative secretary position in the Superintendent's Office at GRHA to an administrative secretary position in the Planning and Development Office at GRHA. See Hearn v. City of Gainesville, 688 F.2d 1328, 1332 (11th Cir.1982), citing Bishop v. Wood, 426 U.S. 341, 343-47, 96 S.Ct. 2074, 2076-79, 48 L.Ed.2d 684 (1976). 38 The district court correctly found that if the administrative assistant position was newly created and not merely the secretary position by a contrived name, Ms. Horlock was not deprived of a property interest: the law does not recognize a property interest in a mere expectation. Board of Regents v. Roth, 408 U.S. 564, 577, 92 S.Ct. 2701, 2709, 33 L.Ed.2d 548 (1972). 39 The district court erred, however, in finding that if the administrative "assistant" position is the same as the administrative "secretary" position and Dr. Fields simply transferred Ms. 
Horlock for discriminatory reasons in order to place Ms. Watson in that same position, Ms. Horlock was, without more, deprived of a "clearly established" right. The district court mistakenly focused on the alleged misconduct undertaken by the defendants, rather than on Ms. Horlock's actual loss. 40 It is undisputed that Ms. Horlock still retains her Merit System job classification at GRHA. Nor does plaintiff-appellee claim that she was suspended without pay, demoted, or subjected to a reduction in salary.6 In essence, appellee claims that although she has the same job classification, carries out the same duties, and receives the same pay, she has lost the opportunity to work in one section of GRHA rather than the other and has thereby been deprived of a property interest. 41 The existence of a property interest is determined by reference to state law. Bishop, 426 U.S. at 344, 96 S.Ct. at 2077; Whalen v. City of Atlanta, 539 F.Supp. 1202, 1205 (N.D.Ga.1982). Under Georgia law, an employee does not have a property interest in a particular position within the employing organization. See Clark v. State Personnel Board, 252 Ga. 548, 550, 314 S.E.2d 658 (1984). More specifically, unless the aggrieved employee has suffered a dismissal from employment, demotion, disciplinary reduction in salary, or suspension without pay, she has not suffered an "adverse action" nor been deprived of any property interest. See Horne v. Skelton, 152 Ga.App. 654, 658, 263 S.E.2d 528 (1979), citing Sec. 15.101 of the Rules and Regulations of the [Georgia] State Merit System of Personnel Administration. 42 We need not make a definitive determination, however, as to whether under Georgia law and the facts of this case, such a lateral transfer deprived Ms. Horlock of a property interest. 
Under the Supreme Court's qualified immunity jurisprudence, as expressed in Harlow, Mitchell, Anderson, and interpreted by this court in Rich, our role is confined to determining whether the right asserted was "clearly established" at the time it was alleged to have been violated.7 In the context of this case, this requires the plaintiff-appellee to demonstrate the deprivation of a "clearly established" property interest. We are satisfied that the appellee has not carried this burden.8III. CONCLUSION 43 Even if, as Ms. Horlock alleges, the defendant public officials transferred her without due process and did not create a truly "new" position to which Ms. Horlock had no property interest, the defendants did not violate a "clearly established right" and thus were entitled to qualified immunity on the section 1983 due process claim. 44 The decision of the district court is REVERSED. The case is REMANDED to the district court with instructions to grant defendants' motion for summary judgment on the basis of their qualified immunity to the section 1983 due process claim. ORDER 45 Before TJOFLAT, Chief Judge, FAY, KRAVITCH, JOHNSON, HATCHETT, ANDERSON, CLARK, EDMONDSON and COX, Circuit Judges.*** BY THE COURT: 46 A majority of the judges in active service on the court's own motion having determined to have this case reheard en banc. 47 IT IS ORDERED that the above cause shall be reheard by this court en banc without oral argument during the week of June 11, 1990, on a date hereafter to be fixed. The clerk will specify a briefing schedule for the filing of en banc briefs. The previous panel's opinion is hereby VACATED. * See Rule 34-2(b), Rules of the U.S. Court of Appeals for the Eleventh Circuit ** Honorable Joseph T. Sneed, Senior U.S. Circuit Judge for the Ninth Circuit, sitting by designation 1 In Riley v. 
Wainwright, 810 F.2d 1006 (11th Cir.1987), the district court denied the qualified immunity claim because it determined that "substantial [additional] factual development" was needed before that court could adequately assess the qualified immunity claim. See Goddard v. Urrea, 847 F.2d 765, 769 (11th Cir.1988) (dismissing an appeal on this ground, citing Riley ). But see, Goddard, 847 F.2d at 769 (Johnson, dissenting) (arguing that Riley was distinguishable and that appellate court should have assumed facts as alleged by plaintiff and decided immunity issue) In this case, the district court decided as a matter of law that the material factual dispute over whether the "secretary" and "assistant" positions were the same precluded it from granting summary judgment. The district court did not find, as did the court in Riley, that general factual development was necessary for a proper evaluation of the immunity defense. See Rich, 841 F.2d at 1561 n. 1 (distinguishing Riley on this ground). 2 As is readily apparent from the discussion in the text, under the facts of this case the qualified immunity question turns squarely on whether Ms. Horlock had a property interest in her assignment as an administrative secretary in the Superintendent's Office at GRHA, rather than as an administrative secretary in the Development Section at GRHA. Appellee is therefore entirely correct to argue that appellants are attempting in this appeal to raise the very same property interest issue that we declined to evaluate on appellants' petition for discretionary review under 28 U.S.C. Sec. 1292. (There, appellants asserted that the lack of a property interest meant that the appellee failed to state a section 1983 due process violation.) Appellee is entirely incorrect, however, to conclude from the re-emergence of an issue that we declined to take up on discretionary appeal that we should or could decline to give that issue its due consideration when presented in a nondiscretionary appeal. 
As discussed in the text, appellants are entitled to have this court decide whether the district court correctly interpreted the law of qualified immunity as applied to the facts alleged by the plaintiff-appellee. If the property interest issue is crucial to our determination of whether defendants are entitled to qualified immunity--and appellee has not argued to the contrary--we must evaluate that question regardless of whether the same matter was presented on the merits of a discretionary appeal that we declined to hear. In essence, appellants came to this court the first time with no "ticket" to enter the "dance hall." We declined to let appellants "in" for free. Now that appellants have presented the appropriate "ticket," it matters not in the least that they have chosen to perform the same "dance." Cf. Godbold, Twenty Pages and Twenty Minutes--Effective Advocacy on Appeal, 30 Sw.L.J. 801, 805 (1976) (analogizing elements of appellate jurisdiction to "tickets" for review.) 3 See Mitchell, 472 U.S. at 528, 105 S.Ct. at 2816-17 (appellate court must determine whether legal norms allegedly violated were established at time of challenged actions); and Id. at 526, 105 S.Ct. at 2815-6 (plaintiff must uncover through discovery sufficient evidence to create genuine issue of whether defendant committed alleged acts) 4 Appellants concede that their actions were taken under color of state law. Appellant's Brief at 14 5 Plaintiff-appellee makes no claim that she was deprived of a life or liberty interest 6 Appellee did make passing reference in the statement of facts in her brief that she was not granted an annual merit increase in salary due to the low score she received from Ms. Watson on the Report of Performance. However, appellee did not argue that this amounted to a property interest; nor do we find it to be a property interest since it appears to be a mere expectation. See, Board of Regents v. Roth, 408 U.S. 564, 92 S.Ct. 
2701, 33 L.Ed.2d 548 (1972) 7 See, e.g., Mitchell, 472 U.S. at 528, 105 S.Ct. at 2816-17 (appellate court need not determine whether plaintiff's allegations actually state a claim; need only determine whether legal norms allegedly violated were clearly established) 8 As we sought to make clear in our discussion of the procedural aspects of this case, our decision today affects only a small portion of this case--the section 1983 due process claim asserted against the individual defendants Drawing upon the magistrate's report, the district court made the following assessment of general complexion of this case: The magistrate aptly summarized the nature of this case: Plaintiff alleges defendants denied her her right to be free from discrimination in employment on the basis of race and age, the right to protest unlawful employment practices of the defendants, and the right not to be deprived of a protected property interest without due process of law. These are fundamental rights which have been established through the Title VII, ADEA, and the Civil Rights statutes which defendants should have been aware of. None of these statutes are of recent enactment. With the exception of the implicit assumption in this passage that misconduct violative of statutory and constitutional norms automatically leads to the deprivation of a property interest, we agree with the above assessment of the case. The lack of a "deprived property interest" does not make the alleged misconduct somehow proper or lawful. As the above statement makes clear, other rights and remedies may be available to Ms. Horlock. *** Senior U.S. Circuit Judge James C. Hill has elected to participate in further proceedings in this matter pursuant to 28 U.S.C. Sec. 46(c)
/*
 * MIT License
 *
 * Copyright (c) 2018 Kasun Vithanage
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

package util

import (
	"errors"
	"strings"
)

var (
	// ErrUnbalancedQuotes raised when quotes are not balanced in a string
	ErrUnbalancedQuotes = errors.New("unbalanced quotes")
)

// ToString Convert an interface to string
func ToString(i interface{}) string {
	if s, ok := i.(string); ok {
		return s
	}
	return ""
}

// SplitSpacesWithQuotes will split the string by spaces and preserve texts inside " " marks
// error is returned when an unbalanced quote was found in the string
func SplitSpacesWithQuotes(s string) ([]string, error) {
	var ret []string
	var buf = new(strings.Builder) // not in quote string buffer
	var scanned string
	var err error
	for pos := 0; pos < len(s); pos++ {
		char := s[pos]
		switch char {
		case ' ':
			if buf.Len() > 0 {
				ret = append(ret, buf.String())
				buf.Reset()
			}
		case '"':
			pos, scanned, err = scanForByte(s, pos, '"')
			if err != nil {
				return nil, err
			}
			ret = append(ret, scanned)
		default:
			buf.WriteByte(char)
		}
	}
	if buf.Len() > 0 {
		ret = append(ret, buf.String())
	}
	return ret, nil
}

func scanForByte(s string, pos int, r byte) (int, string, error) {
	var ret = new(strings.Builder)
	for pos++; pos < len(s); pos++ {
		char := s[pos]
		switch char {
		case '\\':
			if pos >= len(s)-1 {
				return 0, "", ErrUnbalancedQuotes
			}
			pos++
			ret.WriteByte(s[pos])
		case r:
			return pos, ret.String(), nil
		default:
			ret.WriteByte(char)
		}
	}
	return 0, "", ErrUnbalancedQuotes
}
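For a quick cross-check of the intended contract of `SplitSpacesWithQuotes`, Python's standard-library `shlex` module implements essentially the same behavior: split on spaces, keep quoted substrings as single tokens, and fail on an unbalanced quote. Note it is an analogue rather than an exact port — `shlex` also handles single quotes and more escape forms than the Go code above:

```python
import shlex

# Quoted substrings survive as single tokens.
tokens = shlex.split('set name "John Doe" age 30')
print(tokens)  # → ['set', 'name', 'John Doe', 'age', '30']

# An unbalanced quote is an error, mirroring ErrUnbalancedQuotes above.
try:
    shlex.split('unbalanced "quote')
except ValueError as e:
    print("error:", e)  # → error: No closing quotation
```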
Lindke and Barnaby lived together from January 2012 until they broke up in September. According to what the Buffalo News gleaned from the suit, Barnaby possesses the 2008 Escalade but he had signed the title over to Lindke last January, after his license was suspended for a DWI—an incident in which police say he drove nine miles without one of his Porsche's front wheels. No money changed hands in the vehicle transaction, but Barnaby maintains that he continued to pay the insurance. The engagement ring was given to Lindke when Barnaby proposed sometime over the summer, though the couple can't even agree on what month that happened. Lindke's lawyer said the SUV was "a gift," and that Lindke intends to keep both the Escalade and the ring. Oh, Lindke also intends to file a countersuit alleging that Barnaby owes her money for work she had done creating his website, which is a thing that actually exists. Barnaby may have standing to get the ring back under New York state law, according to the Buffalo News, which also said Barnaby now works as a youth hockey coach.
extern NSString * const SERVICE_RESPONSE_ID;
extern NSString * const SERVICE_RESPONSE_UUID;
extern NSString * const SERVICE_RESPONSE_DEVICE_ID;
extern NSString * const SERVICE_RESPONSE_ID_PRIMARY;
It is generally known that a semiconductor having few crystal defects and good crystallinity is grown on a substrate by using a substrate lattice-matched with the semiconductor to be grown. There is, however, no substrate that is lattice-matched with a nitride semiconductor, has excellent crystallinity, and allows a nitride semiconductor crystal to be stably grown. For this reason, there is no choice but to grow a nitride semiconductor on a substrate, e.g., a sapphire, spinel, or silicon carbide substrate, that is not lattice-matched with nitride semiconductors. Various research institutes have made attempts to manufacture GaN bulk crystals that are lattice-matched with nitride semiconductors. However, it has only been reported that GaN bulk crystals having sizes of several millimeters are obtained. That is, any practical GaN bulk crystal like the one from which many wafers are cut to be actually used as substrates for the growth of nitride semiconductor layers has not been obtained. As a technique of manufacturing GaN substrates, for example, Jpn. Pat. Appln. KOKAI Publication Nos. 7-202265 and 7-165498 disclose a technique of forming a ZnO buffer layer on a sapphire substrate, growing a nitride semiconductor on the ZnO buffer layer, and dissolving and removing the ZnO buffer layer. However, since the ZnO buffer layer grown on the sapphire substrate has poor crystallinity, it is difficult to obtain a nitride semiconductor crystal having good quality by growing a nitride semiconductor on the buffer layer. In addition, it is difficult to continuously grow a nitride semiconductor thick enough to be used as a substrate on the thin ZnO buffer layer.
When a nitride semiconductor electronic element used for various electronic devices such as a light-emitting diode (LED) device, a laser diode (LD) device, and a light-receiving device is to be manufactured, if a substrate made of a nitride semiconductor having few crystal defects can be manufactured, a new nitride semiconductor having few lattice defects and forming a device structure can be grown on the substrate. Therefore, the obtained device acquires greatly improved performance. That is, a high-performance device that has not been realized in the past can be realized. It is, therefore, an object of the present invention to provide a method of growing a nitride semiconductor crystal having excellent crystallinity. More specifically, it is an object of the present invention to provide a method of growing a nitride semiconductor crystal that can provide a nitride semiconductor substrate, a nitride semiconductor substrate, and a nitride semiconductor device formed on the nitride semiconductor substrate.
@fixture-OroCRMBundle:activities.yml
Feature: Activity list feature
  In order to have ability manage contact activity
  As OroCRM sales rep
  I need to view, filter, paginate activities in activity list

  Scenario: Filter activities by type
    Given I login as administrator
    Given I go to Customers/Contacts
    And click view Charlie in grid
    And there are 10 records in activity list
    When I check "Task" in Activity Type filter
    Then there are 2 records in activity list
    When I check "Email" in Activity Type filter
    Then there are 4 records in activity list
    When I check "Call" in Activity Type filter
    Then there are 6 records in activity list
    When I check "Note" in Activity Type filter
    Then there are 8 records in activity list
    When I check "Calendar event" in Activity Type filter
    Then there are 10 records in activity list

  Scenario: Paginate activity list
    Given the following note:
      | activityTargets   | createdAt                       | updatedAt                       |
      | [@contactCharlie] | <dateTimeBetween("now", "now")> | <dateTimeBetween("now", "now")> |
    And I reset Activity Type filter
    And I shouldn't see "Merry Christmas" email in activity list
    When go to older activities
    Then I should see "Merry Christmas" email in activity list

  Scenario: Filter activities by date range
    Given I go to newer activities
    And there are 10 records in activity list
    And I shouldn't see "Merry Christmas" email in activity list
    When I filter Date Range as between "2015-12-24" and "2015-12-26"
    Then I should see "Merry Christmas" email in activity list
    And there is one record in activity list
Establishment of a mass screening method of sand fly vectors for Leishmania infection by molecular biological methods. Surveillance of the prevalence of Leishmania and its vector, sand fly species, in endemic and surrounding areas is important for prediction of the risk and expansion of leishmaniasis. In this study, a method for the mass screening of sand flies for Leishmania infection was established. This method was applied to 319 field-captured specimens, and 5 positive sand flies were detected. Sand fly species were identified by polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) of the 18S rRNA gene, and all the positive flies were Lu. hartmanni. Furthermore, cytochrome b (Cyt b) gene sequence analyses identified all the parasites as Endotrypanum species including a probable novel species. Because the method requires minimum effort and can process a large number of samples at once, it will be a powerful tool for studying the epidemiology of leishmaniasis.
Quick and Easy Pate Recipe Even people who say they don't like liver love this easy spread. It is always a hit at parties. It can be made a couple of days ahead, which is a big plus during the holiday season. Recipe from the good folks at Bacardi Rum.
Are you forgetting that I am a mother with a young son? I would no more go into the men's dressing room to help him pick out clothes than I would go into the men's locker room to help him get ready to swim. This is why I buy his clothing and try it on him at home and keep the receipt. If we ever have to buy him something that HAS to be tried on at the store, I'd either find a place with family dressing rooms or his dad would do it. Probably the former, since his dad hates shopping even more than I do. I've seen mothers do it was my point. In fact I even had one walk in on me once, which was somewhat embarrassing for both parties. Just because you are perfectly "courteous" doesn't mean everyone is. Are you arguing for my side now? You had a woman in the men's dressing room with you, it was embarrassing, you wish she hadn't come in? That is just about exactly what I have been saying! No, its that damn Fe! Bad Fe! No harmonizing with Ivy's postion! I don't even know what your argument is anymore. You think people should be more courteous when it comes to dressing rooms? Fine, but the only thing people have direct control over is themselves. If you want some people who shouldn't be in a dressing room to learn that it is "discourteous" then request that the management ask them to leave.
Next time they will probably know better. Also, to Kiddo especially, but to a bunch of other people, please take a metaphorical step back before continuing to post, (since apparently some people have stronger opinions about this subject, that are rubbing up against each other kind of harshly. Remember, though, that others will have a different experience and opinions.) For the most part I've been enjoying the debate with Kiddo here. I get a pretty friendly vibe from it. I was on debate team in college and sometimes I appreciate a sparring partner. But yeah, this thread seems to be getting pretty volatile so I echo Zergling's urgings to try and remain civil. The one who buggers a fire burns his penis-anonymous graffiti in the basilica at Pompeii No, it is called common sense. If a women was in a man's clothing store and she wanted to try on clothes and there was only one dressing room, then I feel it wouldn't be tasteless for her to use it. It goes either way, and it isn't based on any value, just practicality. Okay, then... do you think other men feel this way, or is this just what you personally feel? What makes you believe it's common sense and not your opinion? Do common sense and practicality carry more weight than feelings, or do feelings carry more weight? We have to decide that before moving forward. It seems silly to me that somebody is suppose to go out of there way for what might not even be an issue. If you percieve it as discourteous, then fine, but it's a free country and people also have the right to politely request that a person leave the dressing room. It's an inconvenience either way. I'm just saying, there has to be a rule one way or the other for consistency's sake, because that would certainly constitute common sense, and resolve the issue (assuming that should be the standard). 
Also, it seems somewhat strange that you are demanding to be able to do something for practical reasons (thus disregarding other's feelings), yet turning around and using your feelings as a reason why you should be accommodated by these people enough to have a request made of you instead of you implicitly understanding it. Have you studied rhetoric or logic much, Kiddo? I believe you would do well to do so. You have the passion, just not enough of the skill. maybe it's different here, or I'm misunderstanding you, but I often see men in the dressing room area. Not in the actual stalls of course, but in the big room the stalls open into, sure. That room's usually fairly visible to the rest of the store, for that matter. it doesn't seem like a big deal to me. I don't usually (ever?) see women coming out of the stalls half-dressed. That's pretty much been my experience. I've only ever been in one store (a cheap department store) which has clearly defined changing rooms for men and women. Jeans shops catering to both men and women tend to have unisex change rooms here IIRC. The only places I feel uncomfortable with men hanging round dressing rooms are the lingerie sections of big department stores or lingerie shops - but maybe that's just my personal hang-up. I'm sorry. I did not realize that it was common knowledge that if you are a male trying on women's clothes in a store that only has one dressing room, that you are suppose to go find another store out of respect for all the women who might or might not care whether you use that dressing room. Forgive me for not getting that memo from the courtesy police. Kiddo did you even read the OP or did you just see me mention gay and transgender and you got all wild-eyed and foamy? Where did you get this from? 1. I was in a woman's clothing store. Not Old Navy, not Gap, not Express, no unisex changing rooms. I don't care about unisex changing rooms, but I assume in a woman's clothing store it's a single sex dressing room. 
That is not an unreasonable assumption. Nor do I think it's unreasonable or prudish to ask that in a woman's clothing store, I see women exclusively using the dressing rooms. Your argument sounds idiotic. 2. There were several dressing rooms in both stores and one of the stores even had a lounge area for people to sit in. If the woman was so hellbent on having her boyfriend's (or whoever he was) she could have walked out to the waiting area and not had him sitting in the DRESSING ROOM PROPER.
Immunochromatography can be utilized to perform tests of various diseases simply and easily. Because an analyte in a specimen is usually a slightly-existing substance, such as an influenza virus, HBs antigen or the like, there are demands for enhancement of sensitivity of immunochromatographic tests. Also, there are demands for a test device that permits rapid detection of an analyte in a specimen. Conventional test devices, however, have a problem that a long time is required for the elution of a labeling substance from a label holding member and thus for obtaining test results.
Expanding Muni's Kids Ride Free program to include 18-year-old low- and middle-income youths would cost an additional $1.1 million a year, while eliminating income limits altogether would nearly double the $3 million annual cost, according to a new city report being released as the transit agency considers whether to continue the pilot program at all. City officials could cover the increased costs by raising the sales tax, imposing a special tax on private shuttle buses that use city property, or establishing a local vehicle license fee, according to the report by the Board of Supervisors Budget and Legislative Analyst. Related Stories The report was requested by Supervisor David Campos, who pushed for the pilot program established in March. It allows San Francisco kids between the ages of 5 and 17 whose parents make less than the Bay Area median - $103,000 for a household of four - to ride Muni for free through June of this year. About 78 percent of the estimated 40,000 eligible young people have registered for the program, the report said. The pilot project for what's also known as Free Muni for Youth has not been nearly as expensive as Muni officials estimated when they were debating the program two years ago. The program itself costs about half of what Muni officials initially projected, he said, and it has not led to an expensive increase in service, the report found. The Municipal Transportation Agency should make the program permanent and consider including 18-year-olds who meet the existing income requirements, Campos said, since many young people turn 18 while they are still in high school. "I think this report confirms what we have been hearing - the program has been a resounding success," Campos said. "I think the report gives a lot of policy reasons for continuing the program and, if anything, we need to consider expanding it to include 18-year olds." 
Continue or change Over the next two months, the MTA's Board of Directors will decide whether to continue or change the $2.9 million program next fiscal year as part of its larger budget debate. The board is holding its first public hearing on the proposed MTA budget Tuesday at 1 p.m.; it will approve a budget by the end of April and send it to the supervisors, who can accept or reject but not alter the proposal. Paul Rose, an MTA spokesman, said the agency appreciates the analysis of the free Muni program and will consider its findings as the board works to adopt a budget. In the report, the analyst considered a number of alternatives to the current setup, including a $1.3 million price tag to expand the program to include 18-year-olds who meet the current income requirement. If Muni was to offer free rides to all youths ages 5 to 17, regardless of income, it would cost the city an additional $2.3 million annually. If the income requirement was scrapped and the age was expanded to include 18-year-olds, the annual price tag would increase by another $1.4 million. All told, expanding the program to include all city residents between ages 5 and 18 would cost the city $6.7 million a year. 40,000 kids helped Bob Allen, transportation justice director at Urban Habitat - an advocacy group for low-income communities - said the report shows how wildly successful the first year of the program has been. He cited the high level of participation and low cost as reasons the free ride program should be made permanent and expanded to include all youths. "We are talking about (almost) 40,000 kids, and this is beyond school, this is helping them get to after-school programs, to jobs - it's not just replacing the old school bus service," Allen said. "The impact of this investment is huge. You will hear from some people, 'We can't afford this,' but how can we afford not to invest this small amount?" 
As families in San Francisco are squeezed by increasing housing prices and other rising costs, he said, this is an easy, inexpensive way to make the city more affordable. The budget being considered by the MTA is $915.4 million for the year that starts July 1. "You can't build housing in a day, but this is an instant benefit," he said. "We are creating the next generation of transit riders - I can't think of a better way to do that than by making this permanent and hopefully expanding it."
Latest News Looking back at Demon's Souls; is it better than Dark Souls?... yes PlayStation 3 exclusive, Demon’s Souls, finally made its way to the PlayStation Network for download recently. Given that the sequel, Dark Souls, was the game that really launched the series popularity, the PSN release gave players the opportunity to play the lesser-known sibling for the first time and do a compare and contrast. If you haven’t picked up Demon’s Souls yet, you should take the opportunity now because it is a vastly superior game to Dark Souls. That’s not to suggest that Dark Souls is a poor game, of course, it’s just that Demon’s Souls has a greater purity of vision. In other words, Demon’s Souls is darker (ironic, given the names of both games), crueler, and ultimately more rewarding. Perhaps the most stand-out feature for me that elevates Demon’s Souls above its sequel is that Demon’s Souls actively discourages grinding. Dark Souls places convenient campfires throughout its world – often in places with plenty of respawnable enemies to kill over and over to power up for the tougher fights further on. Demon’s Souls forces players to backtrack significantly to the game’s “Nexus” to do the same thing, creating a lot of dull downtime every time a player wants to preserve the work they have done and recover from the enemies they have faced. For those who haven’t played the game, the Nexus is a central “safe zone” that players can warp to from each of Demon’s Souls’ hostile environments from a few specific teleport points. The Nexus is the only place players can spend the souls (the in-game currency obtained by defeating enemies) to level-up their character. Just like with Dark Souls, dying in one of the game’s levels will make a player lose all of his or her souls, but because the teleport points are spaced so far apart at times, it’s not always an easy matter to jump out, level up and jump back in again. 
This backtracking would be seen by some as a game design weakness compared to Dark Soul’s campfire system that keeps players in the heat of the action at all times. I disagree. Demon’s Souls subtly drives players forward (after all, who wants to dully backtrack?), and the tension of the game is higher because it encourages risky behaviour. Backtrack for ten minutes to cash in the souls you have collected, or risk taking on a boss under-levelled because you know if you beat him a portal to the game’s Nexus will be sitting there ready to be activated with no backtracking necessary? There’s a risk/ reward dynamic to Demon’s Souls that is far more prevalent than the relative safety of the campfire system of the Dark Souls games. Less tangible is the comparatively thick atmosphere of Demon’s Souls. Dark Souls is, by comparison to the Demon’s Souls aesthetic, a happy Disney cartoon. The claustrophobia and heavy darkness that pervades every environment within Demon’s Souls is uncompromisingly bleak. It’s relentless, it’s stifling. It’s also not necessarily fun. To me, Dark Souls is a far more ‘casual play’ game – not because it is not as challenging (the actual combat side of things is balanced about the same in both games), but because I need to be in a far more committed mood to feel like playing Demon’s Souls, and it is harder to play for long sessions. Again, that is not because I am not enjoying it, but because after a few hours the atmosphere of Demon’s Souls makes a funeral seem comparatively entertaining. In Dark Souls there are moments of relief; a ray of sunlight breaking through the clouds over a breathtaking vista – free of enemies and other such threats. All that awaits around the corner in Demon’s Souls is an even darker corridor or an even more fetid swamp. But Demon’s Souls is the more “pure” vision. It’s a game that makes no compromises and no apologies for what it is. 
As such, it’s a rare kind of game in the modern industry that even its successor has to defer to. Demon’s Souls might just be the greatest example of a visionary work to come from this generation of consoles. Title : Looking back at Demon's Souls; is it better than Dark Souls?... yes Pre-order Game Art! Game Art is a book that takes a look at the art and artistry in gaming, writen by DDNet editor-in-chief, Matt Sainsbury. Pre-order at No Starch's website (click on the cover above) and enter the code "DDNET" for 30% off!
Virginia Department of Elections The Virginia Department of Elections is an agency that administers elections in Virginia. Its duties include maintaining a voter registration system. The Department is led by a three-member body, the State Board of Elections. State law provides, "The State Board, through the Department of Elections, shall supervise and coordinate the work of the county and city electoral boards and of the registrars to obtain uniformity in their practices and proceedings and legality and purity in all elections." The Department's current commissioner is Christopher E. "Chris" Piper. References External link Official site Category:Virginia elections Category:Election commissions in the United States
To build:

If your python binary is in a non-standard location or has a non-standard name, run the following instead:

    export PYTHON=/path/to/python
    $PYTHON ./configure
    make
    make install

Prerequisites (Windows only):

* Python 2.6 or 2.7
* Visual Studio 2010 or 2012

Windows:

    vcbuild nosign

You can download pre-built binaries for various operating systems from http://nodejs.org/download/. The Windows and OS X installers will prompt you for the location in which to install. The tarballs are self-contained; you can extract them to a local directory with:
2
0.569777
0.508165
using System; namespace Zio { /// <summary> /// The <see cref="EventArgs"/> base class for file and directory events. Used for /// <see cref="WatcherChangeTypes.Created"/>, <see cref="WatcherChangeTypes.Deleted"/>, /// and <see cref="WatcherChangeTypes.Changed"/>. /// </summary> /// <inheritdoc /> public class FileChangedEventArgs : EventArgs { /// <summary> /// The type of change that occurred. /// </summary> public WatcherChangeTypes ChangeType { get; } /// <summary> /// The filesystem originating this change. /// </summary> public IFileSystem FileSystem { get; } /// <summary> /// Absolute path to the file or directory. /// </summary> public UPath FullPath { get; } /// <summary> /// Name of the file or directory. /// </summary> public string Name { get; } public FileChangedEventArgs(IFileSystem fileSystem, WatcherChangeTypes changeType, UPath fullPath) { if (fileSystem == null) throw new ArgumentNullException(nameof(fileSystem)); fullPath.AssertNotNull(nameof(fullPath)); fullPath.AssertAbsolute(nameof(fullPath)); FileSystem = fileSystem; ChangeType = changeType; FullPath = fullPath; Name = fullPath.GetName(); } } }
2
1.23341
0.796829
Q: How to use a different application.conf in tests than in prod code? I'm trying to test a PersistentActor with scalatest, but I don't know how to point the test code to something like application-test.conf instead of application.conf (I want to switch the leveldb store for events to an in-memory store). Is there any convenient way to do this? A: You could define another application.conf in your test resources: src/test/resources/application.conf This way, you can have test-related configuration that will be used by default in your tests. If you still require multiple configuration settings among your tests, you can always have more than one configuration file in the test resources and explicitly use the one you need: class PersistentActorSpec extends TestKit(ActorSystem("test-system", ConfigFactory.load("application-test")))
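For the in-memory event store the question asks about, the test-side application.conf could look something like the following. This is a sketch assuming Akka Persistence's bundled in-memory journal plugin and local snapshot store (the snapshot directory path is an arbitrary choice, not anything required by the question):

```hocon
# Use the in-memory journal instead of leveldb in tests.
akka.persistence.journal.plugin = "akka.persistence.journal.inmem"

# Keep snapshots on the local filesystem, pointed at a throwaway test directory.
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"
akka.persistence.snapshot-store.local.dir = "target/test-snapshots"
```

Test resources are typically resolved before main resources on the test classpath, so this file is picked up automatically when tests run, with no code changes needed.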
1
1.717045
0.650604
Wednesday, February 29, 2012 One thing is for sure: the weather predictors around here have only a tiny advantage over the rest of us. They are really only sure whether it will snow or rain or be sunny about five minutes before it happens. This winter, looking out the window is about as helpful as checking the weather forecast. That said, there are quite a few members of my family who were overjoyed to wake to the weatherman's miscalculation of the rain/snow line. And although that meant some shovelling, we've had so little snow this winter that even on the eve of March they were happy to welcome it. I have to say that this mild Minnesota winter has been a real blessing to me. Last winter and its 144 days of snow cover wore me down to say the least. This winter it hasn't been really cold (I have lived here nearly 13 years, mind you) for very many days in a row. We've had a few below zero mornings, but we have had so many days in the 30's it's hard to believe. And there has been hardly. any. snow. Amazing! It's a drought that started in the summer and has continued through the winter, which is certainly unusual in our, albeit limited, Minnesota experience. And I couldn't be more thankful. Just a little rest from the pain that winter is, and I can wake up and be happy to see a big pile of snow. Come to me, all who labor and are heavy laden, and I will give you rest. Take my yoke upon you, and learn from me, for I am gentle and lowly in heart, and you will find rest for your souls. For my yoke is easy, and my burden is light.” (Matthew 11:28-30 ESV) Friday, February 24, 2012 Status updates are about all the writing I have had time for lately! One day I will take some time to write a real blog post, but for now, here's what has been going on in our lives from the Facebook point of view! 1/28: Made it to the State Lego Tournament with everything but the camera! Go Cavemen! 1/28: Cavemen win! 1/28: Hooray for the Cavemen! First Lego League Minnesota state champions! 
1/30: I will sing to the LORD as long as I live; I will sing praise to my God while I have being. May my meditation be pleasing to him, for I rejoice in the LORD. (Psalm 104:33-34 ESV) 1/30: Have you heard that Edwin will be taking the Polar Bear Plunge on March 3? If you would like to sponsor him, message me! Support the Special Olympics! Or come to Lake Calhoun and watch! 1/31: So thankful for the gift of a mild January in Minnesota! I really needed that after last winter! 1/31: I'm looking for some upbeat praise & worship music with theologically-sound lyrics, especially with lyrics straight from the Bible... Any suggestions? About Me I am a Christian southern girl living in Minnesota, a wife of 22-plus years, and a homeschooling mom of 5 kiddos, ages 10 to 17. I started running nearly three years ago, and began to like it just before getting injured. I'm starting again, which seems to be what you do in life. Life Is Different Here in many, many ways, and I'd like to tell you how...
1
0.777071
0.100563
1. Field of the Invention The present invention relates to a semiconductor device, and more particularly to a transistor using an SOI (Semiconductor On Insulator) substrate. 2. Description of the Background Art An ASIC (Application Specific Integrated Circuit) is desired to operate at high speed with low power consumption, like a logic LSI such as a microprocessor, and so is a gate array which is a form of the ASIC. FIG. 36 is a cross section showing a structure of a bulk NMOS transistor. A junction capacitance C1 caused by a depletion layer 104 existing between an Si substrate 101 and a source region 102 (or a drain region 103) is large, and a wiring capacitance C2 between a metal wire 105 disposed on an NMOS transistor and the Si substrate 101 through a LOCOS oxide film 108 is also large. In this situation, the use of an SOI layer in a gate array has been proposed. FIG. 37 is a cross section showing a structure of an SOI NMOS transistor. Since a transistor on an SOI layer 106 has a thick buried oxide film 107 thereunder, both the junction capacitance C1 and the wiring capacitance C2 thereof are smaller than those of the bulk NMOS transistor. That allows higher-speed operation and lower power consumption. Moreover, a transistor of which the source and drain are formed in the SOI layer 106 (referred to as "SOI transistor" hereinafter) has a semiconductor (body) 110 which is in an electrically-floating state between a source region 102 and a drain region 103. A "body effect", which refers to an action that a threshold value Vth of the transistor rises due to a potential difference between the Si substrate 101 and the source region 102 when a source potential rises (in the case of an NMOS transistor), is not caused in the SOI transistor, unlike the bulk transistor. Therefore, the SOI transistor may always be used with a small threshold value and operate with low voltage. Thus, the SOI transistor consumes little power.
In the SOI transistor, however, when the source-to-drain voltage reaches a certain level or more, impact-ionized charges near the drain region 103 (e.g., positive holes in the NMOS transistor) cannot escape into the Si substrate 101, since the body 110, where the channel is formed, is in the floating state; they therefore raise the potential at the body 110, which acts as the base of an NPN-type parasitic bipolar transistor consisting of the body 110, the source region 102 and the drain region 103. Then, a current driven by the bipolar transistor is superposed on an original current of the SOI transistor. FIG. 38 is a graph showing a rise in current due to a parasitic bipolar effect. To avoid the parasitic bipolar effect, it is necessary to fix the potential at the body 110 of the SOI transistor. FIG. 39 is a plan view of a structure of field-shield isolation (referred to as "FS isolation" or "FS-isolated structure" hereinafter). FIGS. 40 and 41 are cross sections of FIG. 39 taken along the lines XXXX--XXXX, and XXXXI--XXXXI, respectively. An active region 111 having a width Lf is formed to become the source region 102 or the drain region 103. The active region 111 is provided with a source-drain contact 96 to establish an electrical connection with a wire (not shown). For simple illustration, a gate contact 97 of a gate electrode 109 is not shown in FIG. 39. Similarly to a gate isolation by fixing a potential of the gate electrode 109 (e.g., by connecting the gate electrode 109 to the ground GND through the gate contact 97 in the case of an NMOS transistor) made in a direction of arrangement of the active region 111 (in the vertical direction of FIG. 39), the FS isolation is a device isolation with an FS gate 91 achieved in a direction perpendicular to the vertical direction (in a horizontal direction of FIG. 39).
Specifically, the FS gate 91, like the gate electrode 109, is opposed to the SOI layer 106 with an insulative interlayer film interposed therebetween on both sides of the NMOS transistor, and when it is connected to the ground GND, the NMOS transistor is isolated in the horizontal direction. FIG. 42 is a cross section showing an isolation using a LOCOS oxide film (referred to as "LOCOS isolation" hereinafter) in the horizontal direction. When the LOCOS isolation is used, the SOI layer 106 is separated in the horizontal direction by the LOCOS oxide film 108 and hence it is impossible to provide a contact for supplying the SOI layer 106 with a predetermined potential. In contrast, when the FS isolation is used, the SOI layer 106 can extend also in the horizontal direction and hence it is possible to supply the SOI layer 106 with the predetermined fixed potential in the extension of the SOI layer 106. In this case, it is necessary to provide an FS gate contact 92 for supplying the FS gate 91 with the predetermined potential and a contact plug 93 for FS isolation as shown in FIG. 41, and on the other hand, it is necessary to provide a body contact 94 for supplying the body with the predetermined potential and the contact plug 93 (a region indicated by hatching can have higher impurity concentration in the SOI layer 106 with which the contact plug 93 comes into contact). Therefore, there is a need for a chipped portion 95 at a position to provide the body contact 94 in the FS gate 91, as shown in FIGS. 39 and 40. This gate array, which supplies the body 110 with the predetermined potential, allows reduction in wiring capacitance, ensures high-speed operation and low power consumption and further prevents the parasitic bipolar effect. For example, the SOI transistor with FS-isolated structure may be used in an inverter. FIG. 43A shows a symbol of an inverter and FIG. 43B shows a specific configuration thereof.
The inverter consists of a PMOS transistor P1 and an NMOS transistor N1 connected in series between a potential point supplying a potential Vcc and the ground GND. Specifically, the source of PMOS transistor P1 is fixed to the potential Vcc and the source of NMOS transistor N1 is fixed to the ground potential GND. In this configuration, even if the SOI transistors with FS-isolated structure are used as the transistors P1 and N1 and the bodies of the transistors P1 and N1 are fixed to the potential Vcc and the ground potential GND, respectively, no potential difference exists between the respective bodies and sources and hence no body effect adversely affects the inverter. However, there may be a case where it is preferable not to fix the potential at the body 110 so as not to lose the advantage of no body effect. FIG. 44A shows a symbol of a NAND circuit and FIG. 44B shows a specific configuration thereof. The NAND circuit consists of PMOS transistors P1 and P2 connected in parallel and NMOS transistors N1 and N2 connected in series between the potential point supplying the potential Vcc and the ground GND. Since the sources of transistors P1, P2 and N2 are supplied with the potential Vcc, Vcc and the ground potential GND respectively, if the bodies of the transistors P1, P2 and N2 are supplied with the potential Vcc, Vcc and the ground potential GND respectively, no body effect is produced on the transistors. However, there may be a case where a potential higher than the ground potential GND is applied to the source of the transistor N1 which is not supplied with a fixed potential, and when the ground potential GND is applied to the body of the transistor N1, a threshold voltage rises due to the body effect. In this situation, the NAND circuit cannot operate at low voltage and can hardly avoid slow operation. FIG.
45 shows a layout where vertical alignments of the active regions (referred to as "fields" hereinafter) are arranged in the horizontal direction (this layout is referred to as "master layout" hereinafter). When illustrated with the gate electrodes omitted, the fields 10a to 10i appear rectangular. The characters "P" and "N" in rectangles indicate fields to be provided with the PMOS transistor and the NMOS transistor, respectively. The fields provided with the PMOS transistor and the NMOS transistor are herein termed "p-type field" and "n-type field", respectively. In the background art, when the FS isolation is used to supply the body 110 with the predetermined potential, all of the fields are FS-isolated (the fields 10a to 10i of FIG. 45 are each FS-isolated, and the field with FS-isolated structure is referred to as "FS-isolated field" hereinafter). Since the SOI layers 106 in a field are supplied with the predetermined potential in common (e.g., the ground potential GND in the NMOS transistor), the potentials at the bodies 110 of the transistors in the same field are all fixed. Then, there arises a problem of degradation in high-speed operation of the SOI transistor in a circuit including a transistor having the source which is not supplied with the fixed potential, such as a NAND circuit. Furthermore, by supplying the gate electrode and the body with the same potential, it may become possible to positively utilize the parasitic bipolar effect to gain driving current in the SOI transistor.
3
0.830593
0.809064