Koki Short

Manageable Kubernetes manifests through composable, reusable syntax

Motivation

The description format for Kubernetes manifests, as it stands today, is verbose and unintuitive. Anecdotally, it has been:

- Time consuming to write
- Error-prone, hard to get right without referring to documentation
- Difficult to maintain, read, and reuse

For example, in order to create a simple nginx pod that runs on any host in region us-east1 or us-east2, here is the Kubernetes native syntax:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx_container
    image: nginx:latest
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: k8s.io/failure-domain
            operator: In
            values:
            - us-east1
        - matchExpressions:
          - key: k8s.io/failure-domain
            operator: In
            values:
            - us-east2
```

The Short format is designed to be user-friendly, intuitive, reusable, and maintainable. The same pod in Short syntax looks like:

```yaml
pod:
  name: nginx
  labels:
    app: nginx
  containers:
  - name: nginx
    image: nginx:latest
  affinity:
  - node: k8s.io/failure-domain=us-east1,us-east2
```

Our approach is to reframe Kubernetes manifests in an operator-friendly syntax without sacrificing expressiveness. Koki Short can transform Kubernetes syntax into Short and Short syntax back into Kubernetes. No information is lost in either direction. For more information on Koki Short transformations, please refer to Resources.

Modular and Reusable

Koki Short introduces the concept of modules, which are reusable collections of Short resources. Any resource can be reused multiple times in other resources, and linked resources can be managed as a single unit on the Koki platform. Any valid Koki resource object can be reused, including subtypes of top-level resource types. For example, here's a module called affinity_east1.yaml:

```yaml
affinity:
- node: k8s.io/failure-domain=us-east-1
```

This affinity value can be reused in any pod spec:

```yaml
imports:
- affinity: affinity_east1.yaml
pod:
  name: nginx
  labels:
    app: nginx
  containers:
  - name: nginx
    image: nginx:latest
  affinity: ${affinity}  # reuse the affinity resource here
```

For more information on Koki Modules, please refer to Modules.

Getting started

To start using Short, simply download the binary from the releases page.
```sh
# start with any existing Kubernetes manifest file
$$ cat kube_manifest.yaml
apiVersion: v1
kind: Pod
metadata:
  name: podName
  namespace: podNamespace
spec:
  hostAliases:
  - ip: 127.0.0.1
    hostnames:
    - localhost
    - myMachine
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: k8s.io/failure-domain
            operator: In
            values:
            - us-east1
  containers:
  - name: container
    image: busybox:latest
    ports:
    - containerPort: 6379
      hostPort: 8080
      name: service
      protocol: TCP
    resources:
      limits:
        cpu: "2"
        memory: 1024m
      requests:
        cpu: "1"
        memory: 512m

# convert the Kubernetes manifest into its Short syntax representation
$$ short -f kube_manifest.yaml
pod:
  name: podName
  namespace: podNamespace
  affinity:
  - node: k8s.io/failure-domain=us-east1
  host_aliases:
  - 127.0.0.1 localhost myMachine
  containers:
  - image: busybox
    cpu:
      min: "1"
      max: "2"
    mem:
      min: "512m"
      max: "1024m"
    expose:
    - port_map: "8080:6379"
      name: service

# input can be JSON or YAML
$$ short -f kube_manifest.json -f kube_manifest2.yaml -f kube_multi_manifest.yaml

# stream input
$$ cat kube_manifest.yaml | short -

# revert to the Kubernetes type; the -k flag denotes Kubernetes manifest output
$$ short -k -f koki_spec.yaml
```

For more information, refer to our getting started guide.

Contribute

Koki is completely open source and community driven, including the roadmaps, planning, and implementation. We encourage everyone to help us make Kubernetes manifests more manageable. We welcome issues, pull requests, and participation in our weekly meetings. If you'd like to get started with contributing to Koki Short, read our Roadmap and start with any issue labelled help-wanted or good-first-issue.
https://docs.koki.io/short/
2018-02-18T05:02:58
CC-MAIN-2018-09
1518891811655.65
[]
docs.koki.io
After cleaning malware, OfficeScan agents back up malware data. Notify an online agent to restore backed-up data if you consider the data harmless. Information about which malware backup data was restored, the affected endpoint, and the restore result is available in the logs. For unsuccessful restorations, you can attempt to restore the file again on the Central Quarantine Restore Details screen by clicking Restore All.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/scanning-for-securit/security-risk-logs/viewing-virusmalware/viewing-central-quar.aspx
2018-09-18T15:16:11
CC-MAIN-2018-39
1537267155561.35
[]
docs.trendmicro.com
2 Comments

Nov 12, 2013, Petr Sakař: To specify a dependency on a module in a particular slot in MANIFEST.MF, append the slot name after a colon. For example: module org.springframework.spring, slot snowdrop (see the sketch below).

May 09, 2014, Jochen Riedlinger: If I have a MANIFEST.MF with "Dependencies:" AND a jboss-deployment-structure.xml file in my EAR, which one has precedence?
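As a concrete rendering of the slot syntax described in the first comment (an illustrative sketch built from that comment, not text taken from the JBoss page itself):

```
Dependencies: org.springframework.spring:snowdrop
```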
https://docs.jboss.org/author/display/AS72/Class+Loading+in+AS7
2018-09-18T16:21:02
CC-MAIN-2018-39
1537267155561.35
[]
docs.jboss.org
docs.rs failed to build itertools-0.0.6; I recommend shortening.
https://docs.rs/crate/itertools/0.0.6
2018-09-18T16:14:28
CC-MAIN-2018-39
1537267155561.35
[]
docs.rs
Pyramid Introduction

Pyramid is a Python web application framework. It is designed to make creating web applications easier. It is open source.

Pyramid follows these design and engineering principles:

- Simplicity - Pyramid is designed to be easy to use. You can get started even if you don't understand it all. And when you're ready to do more, Pyramid will be there for you.
- Minimalism - Out of the box, Pyramid provides only the core tools needed for nearly all web applications: mapping URLs to code, security, and serving static assets (files like JavaScript and CSS). Additional tools provide templating, database integration, and more. But with Pyramid you can "pay only for what you eat".
- Documentation - Pyramid is committed to comprehensive and up-to-date documentation.
- Speed - Pyramid is designed to be noticeably fast.
- Reliability - Pyramid is developed conservatively and tested exhaustively. Our motto is: "If it ain't tested, it's broke".
- Openness - As with Python, the Pyramid software is distributed under a permissive open source license.

Why Pyramid?

In a world filled with web frameworks, why should you choose Pyramid?

Modern

Pyramid is fully compatible with Python 3. If you develop a Pyramid application today, you can rest assured that you'll be able to use the most modern features of your favorite language. And in the years to come, you'll continue to be working on a framework that is up-to-date and forward-looking.

Tested

Untested code is broken by design. The Pyramid community has a strong testing culture, and our framework reflects that. Every release of Pyramid has 100% statement coverage (as measured by coverage) and 95% decision/condition coverage (as measured by instrumental). It is automatically tested using Travis and Jenkins on supported versions of Python after each commit to its GitHub repository. Official Pyramid add-ons are held to a similar testing standard. We still find bugs in Pyramid, but we've noticed we find a lot fewer of them while working on projects with a solid testing regime.

Documented

The Pyramid documentation is comprehensive. We strive to keep our narrative documentation both complete and friendly to newcomers. We also maintain the Pyramid Community Cookbook of recipes demonstrating common scenarios you might face. Contributions in the form of improvements to our documentation are always appreciated. And we always welcome improvements to our official tutorials as well as new contributions to our community-maintained tutorials.

Supported

You can get help quickly with Pyramid. It's our goal that no Pyramid question go unanswered. Whether you ask a question on IRC, on the pylons-discuss mailing list, or on StackOverflow, you're likely to get a reasonably prompt response. Pyramid is also a welcoming, friendly space for newcomers. We don't tolerate "support trolls" or those who enjoy berating fellow users in our support channels. We try to keep it well-lit and new-user-friendly.

See also our #pyramid IRC channel, our pylons-discuss mailing list, and Support and Development.

What makes Pyramid unique

There are many tools available for web development. What would make someone want to use Pyramid instead? What makes Pyramid unique? With Pyramid, we don't believe you should have to choose between a framework for small applications and one for large applications. You can't really know how large your application will become. You certainly shouldn't have to rewrite a small application in another framework when it gets "too big". A well-designed framework should be able to be good at both.
Pyramid is that kind of framework. Pyramid provides a set of features that are unique among Python web frameworks. Others may provide some, but only Pyramid provides them all, in one place, fully documented, and à la carte without needing to pay for the whole banquet.

Build single-file applications

You can write a Pyramid application that lives entirely in one Python file. Such an application is easy to understand since everything is in one place. It is easy to deploy because you don't need to know much about Python packaging. Pyramid allows you to do almost everything that so-called microframeworks can, in very similar ways.

See also Creating Your First Pyramid Application.

Configure applications with decorators

Pyramid allows you to keep your configuration right next to your code. That way you don't have to switch files to see your configuration. For example:

```python
from pyramid.view import view_config
from pyramid.response import Response

@view_config(route_name='fred')
def fred_view(request):
    return Response('fred')
```

However, using Pyramid configuration decorators does not change your code. It remains easy to extend, test, or reuse. You can test your code as if the decorators were not there. You can instruct the framework to ignore some decorators. You can even use an imperative style to write your configuration, skipping decorators entirely.

See also Adding View Configuration Using the @view_config Decorator.

Generate application URLs

Dynamic web applications produce URLs that can change depending on what you are viewing. Pyramid provides flexible, consistent, easy-to-use tools for generating URLs. When you use these tools to write your application, you can change your configuration without fear of breaking links in your web pages (a short illustrative sketch follows at the end of this passage).

See also Generating Route URLs.

Serve static assets

Web applications often require JavaScript, CSS, images, and other so-called static assets. Pyramid provides flexible tools for serving these kinds of files. You can serve them directly from Pyramid, or host them on an external server or CDN (content delivery network). Either way, Pyramid can help you generate URLs so you can change where your files come from without changing any code.

See also Serving Static Assets.

Develop interactively

When your application has an error, an interactive debugger allows you to poke around from your browser to find out what happened. To use the Pyramid debug toolbar, build your project with a Pyramid cookiecutter.

See also The Debug Toolbar.

Debug with power

When things go wrong, Pyramid gives you powerful ways to fix the problem. You can configure Pyramid to print helpful information to the console. The debug_notfound setting shows information about URLs that aren't matched. The debug_authorization setting provides helpful messages about why you aren't allowed to do what you just tried. Pyramid also has command-line tools to help you verify your configuration. You can use proutes and pviews to inspect how URLs are connected to your application code.

See also Debugging View Authorization Failures, Command-Line Pyramid, and p* Scripts Documentation.

Extend your application

Pyramid add-ons extend the core of the framework with useful abilities. There are add-ons available for your favorite template language, SQL and NoSQL databases, authentication services, and more. Supported Pyramid add-ons are held to the same demanding standards as the framework itself. You will find them to be fully tested and well documented.
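As referenced in the URL-generation paragraph above, here is a minimal, hedged sketch. The route name 'fred' and the '/fred' pattern are placeholders, not taken from the Pyramid documentation; Configurator.add_route, Configurator.add_view, and request.route_url are standard Pyramid APIs.

```python
from pyramid.config import Configurator
from pyramid.response import Response

def fred_view(request):
    # route_url builds the URL from the route definition, so changing the
    # route pattern later does not break links generated here.
    return Response('This page lives at %s' % request.route_url('fred'))

config = Configurator()
config.add_route('fred', '/fred')              # placeholder route name and pattern
config.add_view(fred_view, route_name='fred')
app = config.make_wsgi_app()
```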
Write your views, your way

A fundamental task for any framework is to map URLs to code. In Pyramid, that code is called a view callable. View callables can be functions, class methods, or even callable class instances. You are free to choose the approach that best fits your use case. Regardless of your choice, Pyramid treats them the same. You can change your mind at any time without any penalty. There are no artificial distinctions between the various approaches. A view callable can be defined as a plain function or as methods of a class; both styles appear in the sketch at the end of this passage.

See also @view_config Placement.

Find your static assets

In many web frameworks, the static assets required by an application are kept in a globally shared location, "the static directory". Others use a lookup scheme, like an ordered set of template directories. Both of these approaches have problems when it comes to customization. Pyramid takes a different approach. Static assets are located using asset specifications, strings that reference both a Python package name and a file or directory name, e.g. MyPackage:static/index.html. These specifications are used for templates, JavaScript and CSS, translation files, and any other package-bound static resource. By using asset specifications, Pyramid makes it easy to extend your application with other packages without worrying about conflicts. What happens if another Pyramid package you are using provides an asset you need to customize? Maybe that page template needs better HTML, or you want to update some CSS. With asset specifications you can override the assets from other packages using simple wrappers. Examples: Understanding Asset Specifications and Overriding Assets.

Use your templates

In Pyramid, the job of creating a Response belongs to a renderer. Any templating system (Mako, Chameleon, Jinja2) can be a renderer. In fact, packages exist for all of these systems. But if you'd rather use another, a structured API exists allowing you to create a renderer using your favorite templating system. You can use the templating system you understand, not one required by the framework. What's more, Pyramid does not make you use a single templating system exclusively. You can use multiple templating systems, even in the same project. Example: Using Templates Directly.

Write testable views

When you use a renderer with your view callable, you are freed from needing to return a "webby" Response object. Instead your views can return a simple Python dictionary. Pyramid will take care of rendering the information in that dictionary to a Response on your behalf. As a result, your views are more easily tested, since you don't need to parse HTML to evaluate the results. Pyramid makes it a snap to write unit tests for your views, instead of requiring you to use functional tests. A typical web framework might return a Response object from a render_to_response call. While you can do this in Pyramid, you can also return a Python dictionary, as in the sketch that follows. By configuring your view to use a renderer, you tell Pyramid to use the {'a': 1} dictionary and the specified template to render a response on your behalf. The string passed as renderer= is an asset specification. Asset specifications are widely used in Pyramid. They allow for more reliable customization. See Find your static assets for more information.
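The sketch below illustrates the three styles mentioned above: a view callable defined as a function, a few views defined as methods of a class, and a renderer-based view that returns a dictionary. The route names and the template asset specification are placeholders; @view_config with route_name and renderer is the standard Pyramid API.

```python
from pyramid.response import Response
from pyramid.view import view_config

# A view callable defined as a plain function.
@view_config(route_name='hello')
def hello_view(request):
    return Response('Hello')

# A few views defined as methods of a class instead.
class GreetingViews:
    def __init__(self, request):
        self.request = request

    @view_config(route_name='hi')
    def hi(self):
        return Response('Hi')

    @view_config(route_name='bye')
    def bye(self):
        return Response('Bye')

# A testable view: return a dictionary and let the named renderer build the
# Response. 'mypackage:templates/hello.pt' is a placeholder asset specification.
@view_config(route_name='hello_template',
             renderer='mypackage:templates/hello.pt')
def hello_template_view(request):
    return {'a': 1}
```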
Use events to coordinate actions

When writing web applications, it is often important to have your code run at a specific point in the lifecycle of a request. In Pyramid, you can accomplish this using subscribers and events. For example, you might have a job that needs to be done each time your application handles a new request. Pyramid emits a NewRequest event at this point in the request-handling lifecycle. You can register your code as a subscriber to this event using a clear, declarative style:

```python
from pyramid.events import NewRequest
from pyramid.events import subscriber

@subscriber(NewRequest)
def my_job(event):
    do_something(event.request)
```

Pyramid's event system can be extended as well. If you need, you can create events of your own and send them using Pyramid's event system. Then anyone working with your application can subscribe to your events and coordinate their code with yours. Example: Using Events and Event Types.

Build international applications

Pyramid ships with internationalization-related features in its core: localization, pluralization, and creating message catalogs from source files and templates. Pyramid allows for a plurality of message catalogs via the use of translation domains. You can create a system that has its own translations without conflict with other translations in other domains. Example: Internationalization and Localization.

Build efficient applications

Pyramid provides an easy way to cache the results of slow or expensive views. You can indicate in view configuration that you want a view to be cached:

```python
from pyramid.view import view_config

@view_config(http_cache=3600)  # 60 minutes
def myview(request):
    ...
```

Pyramid will automatically add the appropriate Cache-Control and Expires headers to the response it creates. See the add_view() method's http_cache documentation for more information.

Build fast applications

The Pyramid core is fast. It has been engineered from the ground up for speed. It only does as much work as absolutely necessary when you ask it to get a job done. If you need speed from your application, Pyramid is the right choice for you.

Store session data

Pyramid has built-in support for HTTP sessions, so you can associate data with specific users between requests. Lots of other frameworks also support sessions. But Pyramid allows you to plug in your own custom sessioning system. So long as your system conforms to a documented interface, you can drop it in place of the provided system. Currently there is a binding package for the third-party Redis sessioning system that does exactly this. But if you have a specialized need (perhaps you want to store your session data in MongoDB), you can. You can even switch between implementations without changing your application code.

Handle problems with grace

Mistakes happen. Problems crop up. No one writes bug-free code. Pyramid provides a way to handle the exceptions your code encounters. An exception view is a special kind of view which is automatically called when a particular exception type arises without being handled by your application. For example, you might register an exception view for the Exception exception type, which will catch all exceptions and present a pretty "well, this is embarrassing" page. Or you might choose to register an exception view for only certain application-specific exceptions. You can make one for when a file is not found, or when the user doesn't have permission to do something.
In the former case, you can show a pretty "Not Found" page; in the latter case you might show a login form. Example: Custom Exception Views (a short sketch follows at the end of this page).

And much, much more...

Pyramid has been built with a number of other sophisticated design features that make it adaptable. Read more about them below.

- Advanced Pyramid Design Features
- You Don't Need Singletons
- Simplify your View Code with Predicates
- Stop Worrying About Transactions
- Stop Worrying About Configuration
- Compose Powerful Apps From Simple Parts
- Authenticate Users Your Way
- Build Trees of Resources
- Take Action on Each Request with Tweens
- Return What You Want From Your Views
- Use Global Response Objects
- Extend Configuration
- Introspect Your Application

Similar to Zope, Pyramid applications may easily be extended. If you work within the constraints of the framework, you can produce applications that can be reused, modified, or extended without needing to modify the original application code. Pyramid also inherits the concepts of traversal and declarative security from Zope.

Similar to Pylons version 1.0, Pyramid is largely free of policy. It makes no assertions about which database or template system you should use. You are free to use whatever third-party components fit the needs of your specific application. Pyramid also inherits its approach to URL dispatch from Pylons.

Similar to Django, Pyramid values extensive documentation. In addition, the concept of a view is used by Pyramid much as it would be by Django.

Other Python web frameworks advertise themselves as members of a class of web frameworks named model-view-controller frameworks. The authors of Pyramid do not believe that the MVC pattern fits the web particularly well. However, if this abstraction works for you, Pyramid also generally fits into this class.
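Returning to the "Handle problems with grace" section above, here is a minimal, hedged sketch of an exception view. The view name and the renderer path are placeholders; registering a view with context=Exception is the standard Pyramid mechanism for exception views.

```python
from pyramid.view import view_config

# Called automatically when an unhandled exception propagates out of the
# application; 'mypackage:templates/oops.pt' is a placeholder asset specification.
@view_config(context=Exception, renderer='mypackage:templates/oops.pt')
def oops_view(exc, request):
    return {'message': 'Well, this is embarrassing.'}
```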
https://pyramid.readthedocs.io/en/latest/narr/introduction.html
2018-09-18T16:16:32
CC-MAIN-2018-39
1537267155561.35
[]
pyramid.readthedocs.io
xCAT2 Release Information

The following table is a summary of the new operating system (OS), hardware, and features that are added to each xCAT release. The OS and hardware listed in the table have been fully tested with xCAT. For a more detailed list of new function, bug fixes, restrictions, and known problems, refer to the individual release notes for a specific release.

- RHEL - Red Hat Enterprise Linux
- SLES - Suse Linux Enterprise Server
- UBT - Ubuntu
https://xcat-docs.readthedocs.io/en/stable/overview/xcat2_release.html
2018-09-18T15:12:02
CC-MAIN-2018-39
1537267155561.35
[]
xcat-docs.readthedocs.io
The Run Macro task will run a macro in the project and can provide an optional argument to the macro. The Macro Argument property supplies that optional argument. Property Type: Static. Default Value: null. The value or result of a rule placed in the Macro Argument property is passed into the special variable Current Macro Argument (see Info: Special Variables). Using the Macro Argument property allows the same macro to be used, but with differing outcomes depending on when it is run. Ensure the property is a static property (it will display the gray orb alongside the property name). The default value of the static property can be changed by typing the required value directly into the property field. A rule can also be built for this property by changing the type to dynamic; see How To: Change A Static Property To A Dynamic Property. The value can also be set in the Form Designer. Static properties can be made dynamic by double-clicking the gray radio button. When this task is added, the properties are static by default. See How To: Change A Static Property To A Dynamic Property to enable rules to be built on these properties.
http://docs.driveworkspro.com/Topic/RunMacro
2018-09-18T16:44:52
CC-MAIN-2018-39
1537267155561.35
[]
docs.driveworkspro.com
If you have created a default user (DFTUSR) for LANSA Web, all users who pass the Web Server validation will use the anonymous user profile to access the LANSA system. Alternatively, if you are using individual user profiles in the Web Server validation list, you can map them to specific IBM i user profiles. The mapped IBM i user profile will be used to access the LANSA system. You will need to ensure that the IBM i user profiles have proper authority to LANSA and have the correct library lists, etc. For more details, refer to LANSA Web and IBM i User Profiles. For details of how to register the users, refer to Step 4. Register Web Users in Partial User Authentication.
https://docs.lansa.com/14/en/lansa008/content/lansa/insem_060.htm
2018-09-18T16:09:00
CC-MAIN-2018-39
1537267155561.35
[]
docs.lansa.com
Scheduling Calls

This topic provides instructions for managing outbound HTTP calls in Scheduler for Pivotal Cloud Foundry (PCF).

Manage Calls

You can use Scheduler for PCF to schedule execution of HTTP calls to external HTTP services. See the following sections to learn more about creating, running, and scheduling calls and viewing call history. Note: If you want to use the Cloud Foundry Command Line Interface (cf CLI) for managing calls ...

Create a Call

You can create a call by running the cf create-call APP-NAME CALL-NAME URL command, where:

- APP-NAME is the app you want to create a call for.
- CALL-NAME is the name for your call.
- URL is the URL to execute an HTTP POST call against.

Execute a Call

You can execute a call manually by running the cf run-call CALL-NAME command. This is often useful for testing the configuration of a call prior to scheduling it for recurring execution. See the following example:

```
$ cf run-call my-call
Enqueuing call my-call for app my-app in org my-org / space my-space as [email protected]...
OK
```

Schedule a Call

You can schedule a call to execute at any time using a schedule expression. Scheduler for PCF requires Cron expressions in the MIN HOUR DAY-OF-MONTH MONTH DAY-OF-WEEK format. For example, to execute a call at noon every day, run the following command:

```
$ cf schedule-call my-call "0 12 ? * *"
```

A single call can have multiple schedules. Each schedule has a GUID to distinguish it from similar schedules.

View Calls

You can use the cf CLI to list all calls in a space by running cf calls. See the following example:

```
$ cf calls
Listing calls for org my-org / space my-space as [email protected]...
Call Name   App Name   URL
my-call     my-app
OK
```

View Schedules for Calls

You can review schedules for all calls in a space by running cf call-schedules. See the following example:

```
$ cf call-schedules
Getting scheduled calls for org my-org / space my-space as [email protected]...
App Name: my-app
my-call   2b69e0c2-9664-46bb-4817-54afcedbb65d   0 12 ? * *
OK
```

View Call History

You can review call history by running cf call-history CALL-NAME. See the following example:

```
$ cf call-history my-call
Getting scheduled call history for my-call in org my-org / space my-space as [email protected]...
1 - 1 of 1 Total Results
Execution GUID                         Execution State   Scheduled Time                  Execution Start Time            Execution End Time              Exit Message
d288a4ba-e0bc-48c9-969c-6ee79e380b20   SUCCEEDED         Mon, 16 Oct 2017 12:10:55 UTC   Mon, 16 Oct 2017 12:10:55 UTC   Mon, 16 Oct 2017 12:10:55 UTC   201 - Created
```

Delete a Call

You can delete a call by running cf delete-call CALL-NAME. See the following example:

```
$ cf delete-call my-call
Really delete the call my-call with url and all associated schedules and history?> [yN]:y
OK
```

Delete a Call Schedule

You can delete a specific schedule by running cf delete-call-schedule SCHEDULE-GUID, where SCHEDULE-GUID is the GUID found in the output of the cf call-schedules command. See the following example:

```
$ cf delete-call
```
https://docs.pivotal.io/pcf-scheduler/1-1/using-calls.html
2018-09-18T15:42:18
CC-MAIN-2018-39
1537267155561.35
[]
docs.pivotal.io
Hibernate.org Community Documentation, 4.2.21.Final, 2015-10-23.

This quickstart covers what is needed to build applications using the native Hibernate APIs, including defining metadata in both annotations and Hibernate's own hbm.xml format.

Table of Contents

This tutorial is located within the download bundle under basic/.

Objectives

The values of these properties are all specific to running H2 in its in-memory mode. connection.pool_size is used to configure the number of connections in Hibernate's built-in connection pool. The hbm2ddl.auto property enables automatic generation of database schemas directly into the database. Finally, add the mapping file(s) for persistent classes to the configuration. The resource attribute of the mapping element causes Hibernate to attempt to locate that mapping as a classpath resource.

Entities are saved using the save method. The identifier property is mapped as follows:

```java
@Id
@GeneratedValue(generator = "increment")
@GenericGenerator(name = "increment", strategy = "increment")
public Long getId() {
    return id;
}
```

@javax.persistence.Id marks the property which defines the entity's identifier.

JPA, however, defines a different bootstrap process that uses its own configuration file named persistence.xml. Applications use this name to reference the configuration when obtaining a javax.persistence.EntityManagerFactory reference. The settings defined in the properties element are discussed later. Note that the persistence unit name is org.hibernate.tutorial.jpa. Saving entities works similarly to Example 2.5, "Saving entities", except that a javax.persistence.EntityManager interface is used instead of an org.hibernate.Session interface.
http://docs.jboss.org/hibernate/orm/4.2/quickstart/en-US/html_single/
2016-09-25T01:37:42
CC-MAIN-2016-40
1474738659680.65
[]
docs.jboss.org
You can enable a feature to display Quick Buttons on the Main screen; when the feature is enabled, eight Quick Buttons appear. To display the Quick Buttons, choose → on the main menu. The Configure screen will appear. Click Interface to expand it, then click Chat Window. Check the Show quick buttons box and apply the change. The buttons may be customized to your liking for performing often-used IRC commands. To customize the Quick Buttons, choose → on the main menu. The Configure screen will appear. Click Interface to expand it, then click Quick Buttons to display the Quick Buttons screen. There are 8 default Quick Buttons. Click on an entry to change it, or use the buttons at the right side of the list to add or remove Quick Buttons. The Button Name column is the name that will appear on the button in the Main screen. Keep the names short. The Button Action column is the action that will be performed when you click the Quick Button. Tips for creating actions are given on the screen. Apply the changes to complete the configuration.

Example:
- Button Name: Msg
- Button Action: Msg %u (note that there is a space after Msg %u)

To use this button in the Main screen, click on a nickname in the Nick Panel, then click the button. /MSG will appear in the Input Line followed by the chosen nickname. Type a message you want to send to that person and press Enter. The message will be sent to the user. Only that user will see the message.
https://docs.kde.org/stable5/en/extragear-network/konversation/quick-buttons.html
2016-09-25T00:18:06
CC-MAIN-2016-40
1474738659680.65
[array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object) array(['quickbuttons_screen.png', 'The Quick Buttons screen'], dtype=object) ]
docs.kde.org
An Act to repeal 20.235 (1) (ke); to amend 20.235 (1) (b) (title), 20.235 (1) (fe), 20.235 (1) (ff), 20.235 (1) (ke), 20.235 (1) (km), 39.30 (3) (c), 39.435 (title) and 39.435 (1); and to repeal and recreate 39.30 (title) of the statutes; Relating to: changing the names, Wisconsin higher education grants, and tuition grants, to Wisconsin grants.
https://docs.legis.wisconsin.gov/2013/proposals/sb406
2016-09-25T00:18:40
CC-MAIN-2016-40
1474738659680.65
[]
docs.legis.wisconsin.gov
Planning a BlackBerry Pushcast Software installation

During the installation process in an environment that includes one server, you can install all of the BlackBerry Pushcast Software components and services on the server. Before you install all of the components on one server, you must determine whether the server can manage the amount of resources that the BlackBerry Pushcast Software requires.
http://docs.blackberry.com/id-id/admin/deliverables/32550/Planning_a_BB_pushcast_installation_1553481_11.jsp
2013-05-18T17:21:10
CC-MAIN-2013-20
1368696382584
[]
docs.blackberry.com
Create a content category

You can create content categories to organize content in the Chalk™ Pushcast™ Software.

- In the Chalk™ Pushcast™ Console, click Content > Manage Categories.
- Click Create Category.
- In the Name field, type a name for the content category.
- In the Description field, type a description for the content category.
- To activate the content category in the content catalog, select the Publish this category check box. If you do not want to activate the content category at this time, you can activate it later.
- Click Save.
http://docs.blackberry.com/zh-cn/admin/deliverables/22986/Create_a_content_category_859484_11.jsp
2013-05-18T17:52:25
CC-MAIN-2013-20
1368696382584
[]
docs.blackberry.com
Chalk Pushcast Software security

The Chalk Pushcast Software implements RBAC and instance-level access control to control which tasks Chalk Pushcast Software users can perform using the Chalk Pushcast Software. Depending on your organization, you might permit an author to create all content and several administrators to assign content to specific Chalk Pushcast Software users and groups. Alternatively, you might permit an administrator to create and assign all content to all users. If you implement both RBAC and instance-level access control, you can also permit a centralized or distributed installation of the Chalk Pushcast Software. For example, you could have a single installation of the Chalk Pushcast Software for all the departments in your organization, or you could install the Chalk Pushcast Software in each department.
http://docs.blackberry.com/zh-cn/admin/deliverables/23049/Chalk_Pushcast_software_security_926014_11.jsp
2013-05-18T17:28:11
CC-MAIN-2013-20
1368696382584
[]
docs.blackberry.com
You will encounter several different types of attributes. Each type has a different "edit style" in an Inspector window. Note that all attribute changes are undoable by choosing Undo Attribute Change from the Edit menu.

Selecting and inspecting objects
- Select an object or multiple objects in your patcher window by Shift-clicking or clicking and dragging an outline around the objects.
- Click the Inspector button in the patcher toolbar. When you open the object inspector on multiple objects, only the attributes shared by the objects are shown. If you have selected objects of different types, the Inspector window will appear with the title multiple objects inspector. In most cases, this means only the common box attributes will appear in the inspector.

Getting information about an Inspector setting
- Position your cursor over the Setting column of any row in the window to see the setting's info button. Hover over the button to see a brief description of the setting.

Editing font attributes
- You can change the font name, size, and font face by editing font attributes for an object.

Editing color attributes
- Click in the Value column to open the Color palette.
- You can use the Color palette to select a color or open the Color Picker to apply your own color palette.

Working with Attribute Default Values
- Select an object or objects in the Patcher window.
- Click in any column in an attribute's row in the Inspector to select it.
- Click on the Modify Selected Item button in the Inspector toolbar and select an option from the pull-down menu. When you release the mouse button, the selected option will be applied to your object. You can use the Modify Selected Item pull-down menu to perform several useful tasks when you are editing attributes.
- Choosing Revert Value from the Modify Selected Item pull-down menu will return the selected attribute to its previous value.
- Choosing Set to Default Value from the Modify Selected Item pull-down menu will set the selected attribute to its default value.
- Max lets you use Patcher-level formatting to apply consistent color schemes to the Max patching environment. You can save these files as Templates and use them as you patch. You can also set a Template to be your default Template, which will be used each time you launch Max.
- If you are working with frozen attribute values, you can choose Set to Frozen Value from the Modify Selected Item pull-down menu to return the selected attribute to the setting you used when you originally froze the attribute. This menu item will only be enabled if the attribute has a frozen value and the current value deviates from that frozen value.

Keeping Track of Objects and their Attributes

When you're working with multiple objects that have lots of attributes, it's sometimes difficult to keep track of which object in your patch you're currently working with. Clicking on the Show Object button in the Inspector toolbar will temporarily color the object you're working on for easy identification.

Adding an attrui Object to Your Patch
- Select an object in the Patcher window.
- Click in any column in an attribute's row in the Inspector to select it.
- Click on the Make Attribute in Patcher button in the Object Inspector toolbar. A connected attrui object that displays the selected attribute will be automatically added to your patch.
https://docs.cycling74.com/max8/vignettes/attributes_inspecting
2020-02-17T09:39:16
CC-MAIN-2020-10
1581875141806.26
[array(['/static/max8/images/e0fce366a73bd428fc743e6663f92e67.png', None], dtype=object) array(['/static/max8/images/2181b8464a54560094f44f779a4216b8.png', None], dtype=object) array(['/static/max8/images/b5c96ee9495c77db1cf23e8b0e95804c.png', None], dtype=object) array(['/static/max8/images/94d463a2c4904b6bca7d5bd04076dc8e.png', None], dtype=object) array(['/static/max8/images/784278eaede45a6746f4415027b6d5ab.png', None], dtype=object) array(['/static/max8/images/ffe394c336257b9f9fa0ac74892e831d.png', None], dtype=object) array(['/static/max8/images/efcf337c592dfc23f4d8b20561c6c63b.png', None], dtype=object) array(['/static/max8/images/193b0bae9763f3171b47a9499316e26f.png', None], dtype=object) array(['/static/max8/images/7968a535e616fd47ea7c1f9ae78278c7.png', None], dtype=object) array(['/static/max8/images/0d5b096bb2640b8f023c140a66fdf611.png', None], dtype=object) array(['/static/max8/images/5b1803dac893429405bb5e12989971e6.png', None], dtype=object) array(['/static/max8/images/064e8d352746e9bad138c856d112def4.png', None], dtype=object) ]
docs.cycling74.com
Why Should Small Businesses Use Social Media Marketing?

You may have heard a lot of buzz about social media marketing lately, but do you know what it is? Many companies are now using it to promote their products and services, and are thereby building stronger and larger businesses. With the popularity of social media sites today, real-time conversations that define your market and brand are happening every day – whether you choose to participate or not. Don't get left behind: those real-time conversations can be harnessed, monitored and measured to drive highly relevant conversations across multiple channels.

Creating a social media marketing strategy is more than just creating a Facebook page or a Twitter page. You want a return on your investment, so you decide on the engagement strategy that leads to sales. The engagement strategies generating the most return are marketing engagement and servicing engagement. Marketing engagement relies on brand management and awareness while connecting with customers. Servicing engagement places the focus on troubleshooting problems.

With that in mind, there are many reasons why small businesses should be using social media for marketing and business engagement. Here are six compelling ones:

1. Branding. Social media marketing can create a recognisable identity for your product or service. This is extremely important for a small business. Social media tools can get the word out about your brand in a way that promotes online conversation and creates buzz.

2. Word-of-mouth marketing. Exposure is key to growing your small business. Word of mouth alone may not generate business, but with social media you can create buzz around your business and brand. More than 80% of online marketing firms agree that social media engagement is based on social interaction between the customer and the business.

3. Reputation management. Social media tools let you keep an eye on what other people and sites are saying about your name, company, or brand online. You can then use the insights to fix any problems, if need be. You could use forums and message boards to answer questions professionally, honestly, and correctly, which will earn you respect as an expert in your niche. People will look to you for answers.

4. Find out what works and what doesn't. Your business does not move forward unless you understand your prior faults. Social media gives you a chance to look at your prior engagement year over year, to find out what worked and what did not. By doing so, you can review your social media metric tools and engagement score.

5. Helps with search engine rankings. Social media also helps you move up in the search engine rankings because of links. Many social news and social networking sites have "follow" links in their profile pages. These links can give your online properties a higher ranking on search engines such as Bing.

6. A cost-effective marketing and advertising alternative.

If you've used social media to support your business, feel free to share your experience in the comments section.
https://docs.microsoft.com/en-us/archive/blogs/keep_your_business_moving/why-should-small-businesses-use-social-media-marketing
2020-02-17T10:53:13
CC-MAIN-2020-10
1581875141806.26
[]
docs.microsoft.com
Thanks for the (Webcast) memories... on daylight saving time and time zones

This morning we reprised our daylight saving time webcasts with a Webcast on (wait for it) "Preparing for Daylight Saving Time." We presented an overview of information on Microsoft products and resources available to help businesses and individuals prepare for the coming changes this fall in North America and around the world to daylight saving time and time zones. The Webcast will be available for online viewing in the next couple of days. I would like to thank the many attendees we had today and everyone from Microsoft for their participation in the LiveMeeting today. Thanks to my co-presenters, Rich, Will, Elizabeth and Sophia (as pictured here - I was behind the camera phone), and a shout out to Steve, Joel, Ronna, Jim, Sue, Shannon, Tim, Alon, Keith and the many people who assisted on our tech chat. We have a technical web chat coming up on September 24th - watch the Webcast page for more details. For more details, please visit. Tags: Microsoft, Daylight Saving Time, Daylight Savings Time, DST
https://docs.microsoft.com/en-us/archive/blogs/mthree/thanks-for-the-webcast-memories-on-daylight-saving-time-and-time-zones
2020-02-17T11:05:50
CC-MAIN-2020-10
1581875141806.26
[]
docs.microsoft.com
AD Forest Recovery - Devising an AD Forest Recovery Plan

Applies To: Windows Server 2016, Windows Server 2012 and 2012 R2, Windows Server 2008 and 2008 R2.

Next Steps
- AD Forest Recovery - Prerequisites
- AD Forest Recovery - Devising a custom forest recovery plan
- AD Forest Recovery - Identify the problem
- AD Forest Recovery - Determine how to recover
- AD Forest Recovery - Perform initial recovery
- AD Forest Recovery - Procedures
- AD Forest Recovery - Frequently Asked Questions
- AD Forest Recovery - Recovering a Single Domain within a Multidomain Forest
- AD Forest Recovery - Forest Recovery with Windows Server 2003 Domain Controllers
https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/manage/ad-forest-recovery-devising-a-plan
2020-02-17T11:24:52
CC-MAIN-2020-10
1581875141806.26
[]
docs.microsoft.com
Requesting Access Rights to an Object When you open a handle to an object, the returned handle has some combination of access rights to the object. Some functions, such as CreateSemaphore, do not require a specific set of requested access rights. These functions always try to open the handle for full access. Other functions, such as CreateFile and OpenProcess, allow you to specify the set of access rights that you want. You should request only the access rights that you need, rather than opening a handle for full access. This prevents using the handle in an unintended way, and increases the chances that the access request will succeed if the object's DACL only allows limited access. Use generic access rights to specify the type of access needed when opening a handle to an object. This is typically simpler than specifying all the corresponding standard and specific rights. Alternatively, use the MAXIMUM_ALLOWED constant to request that the object be opened with all the access rights that are valid for the caller. Note The MAXIMUM_ALLOWED constant cannot be used in an ACE. To get or set the SACL in an object's security descriptor, request the ACCESS_SYSTEM_SECURITY access right when opening a handle to the object.
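A minimal, hedged sketch of the guidance above, using Python's ctypes on Windows. The specific right chosen (PROCESS_QUERY_LIMITED_INFORMATION) and the use of the current process ID are illustrative assumptions; OpenProcess and CloseHandle are the Win32 APIs this documentation discusses.

```python
# Windows-only sketch: request only the access rights you need, not full access.
import ctypes
import os

PROCESS_QUERY_LIMITED_INFORMATION = 0x1000  # narrower than PROCESS_ALL_ACCESS

kernel32 = ctypes.windll.kernel32
handle = kernel32.OpenProcess(
    PROCESS_QUERY_LIMITED_INFORMATION,  # dwDesiredAccess: only what we need
    False,                              # bInheritHandle
    os.getpid(),                        # dwProcessId: our own process, for illustration
)
if handle:
    # ... perform queries permitted by the requested right ...
    kernel32.CloseHandle(handle)
```

Requesting this limited right is more likely to succeed against a restrictive DACL than asking for full access, which is the point the section above makes.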
https://docs.microsoft.com/en-us/windows/win32/secauthz/requesting-access-rights-to-an-object
2020-02-17T10:45:15
CC-MAIN-2020-10
1581875141806.26
[]
docs.microsoft.com
These instructions apply to Autodesk® AutoCAD® Civil 3D 2013 and above, and describe how to load georeferenced Nearmap imagery using Web Map Service (WMS). Screen images shown in these instructions are from AutoCAD Civil 3D 2016. Please note that only certain versions of AutoCAD (Map 3D and Civil 3D) support this functionality. Rather than import a single image of a small area, WMS allows AutoCAD to request the imagery directly from the Nearmap server in a variety of map projections. This guide covers the following topics: ...

In addition to this document, you can watch our video on How to add a WMS connection in Autodesk.

Authentication

To consume Nearmap imagery via WMS, you must authenticate using API Key Authentication. The WMS URL you enter into your application must include your API Key. The WMS Links section of the WMS Integration documentation describes how to copy the correct WMS URL, including your API Key, to the clipboard of your computer. Once you have done that, you can paste it into your application as required (see below).

Setting up a Coordinate System

Before loading Nearmap WMS imagery, set up a coordinate system.

- In the Civil 3D Workspace view, from the Toolspace menu, click the Settings tab.
- Right-click on the drawing name and select Edit Drawing Settings. If you want to take measurements from Nearmap imagery, select a distance-based coordinate system - for example, UTM zones or local grid systems. Please refer to Natively Supported Coordinate Systems.
- Click Apply, then OK.
- In the Planning and Analysis workspace view, click Connect.
- In the Data Connections by Provider window, click Add WMS Connection.
- In the Add a new connection panel, at Connection Name, enter Nearmap.
- At Server name or URL, enter the WMS URL for your region (US, or Australia & New Zealand), including your API Key.
- Change the Server CS Code to your datum. For example, in the US select NAD83 UTM and your zone. If you select EPSG26910 and then hover over it, you will see that it is NAD83 / UTM zone 10.
- Check Combine into one layer, and give the layer a name. Then click Add to Map.

The image layer loads and displays. As you zoom in, higher-resolution imagery loads on demand.

Configuring Measurements

If you would like to take measurements with Nearmap imagery, follow these additional steps: ...
https://docs.nearmap.com/pages/diffpages.action?originalId=25428370&pageId=5996554
2020-02-17T09:09:28
CC-MAIN-2020-10
1581875141806.26
[]
docs.nearmap.com
This topic explains how to check custom requirements or tailor existing rules to your unique needs, either by modifying existing rules or by creating custom rules. All of these steps can be controlled through menu options in C++test; you do not need to perform any actual file manipulations, because C++test completely manages all rule components. After you choose to modify or create a rule, the RuleWizard GUI opens. The RuleWizard User's Guide (accessible by choosing Help > Documentation in the RuleWizard GUI) contains information on how to modify, create, and save rules. See Configuring Test Configurations and Rules for Policies. Before you can check custom coding rules that were designed in RuleWizard, you need to deploy them. To deploy custom rules if you are not using Team Server:
https://docs.parasoft.com/plugins/viewsource/viewpagesrc.action?pageId=27507585
2020-02-17T09:00:02
CC-MAIN-2020-10
1581875141806.26
[]
docs.parasoft.com
Estimator

Introduction

The estimator is used to calculate an estimate of the attitude and angular velocity of the multirotor. It is assumed that the flight controller is mounted rigidly to the body of the aircraft (perhaps with dampening material to remove vibrations from the motors), such that measurements of the on-board IMU are consistent with the motion of the aircraft. Due to the limited computational power of the embedded processor, and to calculate attitude estimates at rates up to 8000 Hz, a simple complementary filter is used rather than an extended Kalman filter. In practice, this method works extremely well, and it is used widely throughout commercially available autopilots.

There are a variety of complementary filters, but the general theory is the same. A complementary filter is a method that combines low- and high-frequency data (complementary in frequency bandwidth). It can be used to fuse the measurements from a gyroscope, accelerometer and sometimes magnetometer to produce an estimate of the attitude of the MAV.

Complementary Filtering

The idea behind complementary filtering is to try to get the "best of both worlds" of gyros and accelerometers. Gyros are very accurate over short spans of time, but they are subject to low-frequency drift. Accelerometers don't drift over the long term, but they experience high-frequency noise as the MAV moves about. To solve these problems, the complementary filter primarily propagates states using gyroscope measurements, but then corrects drift with the accelerometer, which is a partial source of attitude measurements. In a general sense, it is like taking a high-pass filtered version of the gyroscope measurements and a low-pass filtered version of the accelerometer measurements, and fusing the two together in a manner that results in an estimate that is stable over time but also able to handle quick transient motions. For an excellent review of the theory of complementary filtering, consult Mahony's Nonlinear Complementary Filtering on SO(3) paper [1].

Attitude Representation

There are a number of ways to represent the attitude of a MAV. Often, attitude is represented in terms of the Euler angles yaw, pitch and roll, but it can also be represented in other ways, such as rotation matrices and quaternions.

Euler Angles

Euler angles represent rotations about three different axes, usually the z, y, and x axes in that order. This method is often the easiest for users to understand and interpret, but it is by far the least computationally efficient. Propagating euler angles requires a large number of trigonometric functions. In a complementary filter, this propagation is evaluated at every measurement, and the non-linear coupling between \omega and the attitude becomes very expensive, particularly on embedded processors. Another shortcoming of euler angles is known as "gimbal lock". Gimbal lock occurs at the "singularity" of the euler angle representation, pitched directly up or down. The problem occurs because there is more than one way to represent this particular rotation. There are some steps one can take to handle these issues, but it is a fundamental problem associated with using euler angles, and it motivates the other attitude representations.

Rotation Matrix

Rotation matrices are often used in attitude estimation because they do not suffer from gimbal lock, are quickly converted to and from euler angles, and have simple kinematics.
In the rotation-matrix kinematics, \lfloor\omega\rfloor denotes the skew-symmetric matrix of \omega and is related to calculating the cross product. This propagation step is linear with respect to the angular rates, which simplifies calculation significantly. A rotation matrix from the inertial frame to the body frame can be constructed from euler angles, and converting back to euler angles is likewise straightforward.

Quaternions

Quaternions are a number system which extends complex numbers. They have four elements, commonly known as w, x, y, and z. The last three elements can be thought of as describing an axis, \beta, about which a rotation occurred, while the first element, w, can be thought of as describing the amount of rotation, \alpha, about that axis. While this may seem straightforward, quaternions are normalized so that they form a group (that is, a quaternion multiplied by a quaternion is a quaternion), so they end up being quite difficult for a human being to interpret just by looking at the values. However, they provide some amazing computational efficiencies, most of which come from the special mathematics associated with quaternions: the definition of the quaternion itself and the formulas to convert to and from euler angles. The quaternion group is "closed" under quaternion multiplication. To take the "inverse" of a quaternion, simply multiply s or v by -1; to find the difference between two quaternions, quaternion-multiply the inverse of one quaternion with the other. The most important aspect of quaternions, however, is the way dynamics are propagated:

$$
\dot{q} = \tfrac{1}{2}\, q \otimes q_\omega,
$$

where q_\omega is the pure quaternion created from the angular rates. What this means is that, like rotation matrices, quaternion dynamics are linear with respect to the angular rates, as opposed to euler angles, which are non-linear, and they take less computation than rotation matrices because they have fewer terms. Casey et al. [Casey2013] performed a study comparing all three of the above representations, and found that complementary filters using an euler-angle representation took 12 times longer to compute on average than a quaternion-based filter. Quaternions were also about 20% more efficient when compared with rotation matrices. For these reasons, ROSflight uses quaternions in its filter.

Derivation

ROSflight implements the quaternion-based passive "Mahony" filter as described in [1]. In particular, we implement equation 47 from that paper, which also estimates gyroscope biases. A Lyapunov stability analysis is performed in that paper, in which it is shown that all states and biases, except heading, are globally asymptotically stable given an accelerometer and gyroscope measurement. The reference also describes how a magnetometer can be integrated in a similar manner to the accelerometer. That portion of the filter is omitted here due to the unreliable nature of magnetometers on board modern small UAS.

Passive Complementary Filter

The original filter propagates per the following dynamics:

$$
\dot{\hat{q}} = \tfrac{1}{2}\,\hat{q} \otimes \mathrm{p}\!\left(\omega_{\text{final}}\right), \qquad
\dot{\hat{b}} = -k_I\,\omega_{\text{err}},
$$

where \mathrm{p}\left(\cdot\right) creates a pure quaternion from a 3-vector. The term \omega_{\text{final}} is a composite angular rate which consists of the measured angular rates \bar{\omega}, the estimated gyroscope biases \hat{b}, and a correction term \omega_{\text{err}} calculated from another measurement of attitude (usually the accelerometer). The constant gains k_P and k_I determine the dynamics of the filter:
$$
\begin{equation}\tag{2}
\omega_{\text{final}} = \bar{\omega} - \hat{b} + k_P\,\omega_{\text{err}}
\end{equation}
$$

The correction term \omega_{\text{err}} can be understood as the error in the attitude as predicted by another source (e.g., the accelerometer). To calculate \omega_{\text{err}}, the quaternion q_{\text{meas}} describing the rotation between the accelerometer estimate and the z-axis of the inertial frame (i.e., where gravity should be) is first calculated. Next, the quaternion error between the estimate \hat{q} and the accelerometer measurement q_{\text{meas}} is calculated:

$$
\tilde{q} = q_{\text{meas}}^{-1} \otimes \hat{q} = \begin{bmatrix} \tilde{s} \\ \tilde{v} \end{bmatrix}
$$

Finally, \tilde{q} is converted back into a 3-vector per the method described in eq. 47a of [1]:

$$
\omega_{\text{err}} = 2\,\tilde{s}\,\tilde{v}
$$

Both the attitude quaternion and the bias dynamics can be integrated using standard Euler integration, after which the resulting quaternion must be re-normalized.

Modifications to the Original Passive Filter

There have been a few modifications to the passive filter described in [1], consisting primarily of contributions from Casey2013. First, rather than simply taking gyroscope measurements directly as an estimate of \omega, a quadratic polynomial is used to approximate the true angular rate from gyroscope measurements to reduce error; in this approximation, \omega(t_{n-x}) denotes the gyroscope measurement taken x samples previously. In Casey2013, this process was shown to reduce RMS error by more than 1,000 times. There are additional steps associated with performing this calculation, but the benefit in accuracy more than compensates for the extra calculation time.

The second modification is in the way that the attitude is propagated after finding \dot{\hat{q}}. Instead of performing standard Euler integration, we use an approximation of the matrix exponential. The matrix exponential arises out of the solution to the differential equation \dot{x} = Ax, namely

$$
x(t) = e^{At}\, x(0),
$$

and its discrete-time equivalent

$$
x(t_{n+1}) = e^{hA}\, x(t_n).
$$

This discrete-time matrix exponential can be approximated by first expanding the matrix exponential into its infinite series and then grouping the odd and even terms of the series into two sinusoids, which yields the propagation equation for the quaternion dynamics in terms of \lfloor\omega\rfloor_4, the 4x4 skew-symmetric matrix formed from \omega.

External Attitude Measurements

Using the ROSflight estimator with gyro measurements only will quickly drift due to gyro biases. The accelerometer makes the biases in p and q observable and provides another measurement of pitch and roll. To make yaw observable, an external attitude measurement can be provided to the estimator, which is used in much the same way as the accelerometer. Instead of the accelerometer-based calculation outlined above, the correction term \omega_{\text{err}} can be calculated as

$$
\omega_{\text{err}} = k_{\text{ext}} \sum_{i=1}^{3} R(\hat{q})^\top e_i \times \bar{R}^\top e_i,
$$

where k_{\text{ext}} = F_s^{\text{IMU}} / F_s^{\text{ext}} is the ratio of the IMU sample rate to the external attitude sample rate. In our implementation, whenever an external attitude measurement is supplied, any \omega_{\text{err}} calculated from the accelerometer is overwritten by the above calculation for the external attitude update.
Also note that the gain k_P associated with an external attitude can be much higher if we trust the source of the external attitude measurement.

Tuning

The filter can be tuned with the two gains k_P and k_I. Upon initialization, k_P and k_I are set very high, so as to quickly cause the filter to converge upon appropriate values. After a few seconds, they are both reduced by a factor of 10, to a value chosen through manual tuning. A high k_P will cause sensitivity to transient accelerometer errors, while a small k_P will cause sensitivity to gyroscope drift. A high k_I will cause biases to wander unnecessarily, while a low k_I will result in slow convergence upon accurate gyroscope bias estimates. These parameters generally do not need to be modified from the default values.

Implementation

The entire filter is implemented in float-based quaternion calculations. Even though the STM32F10x microprocessor does not contain a floating-point unit, the entire filter has been timed to take about 370\mus. The extra steps of quadratic integration and matrix exponential propagation can be omitted for a 20\mus and 90\mus reduction in computation time, respectively. Even with these functions, however, the filter is sufficiently fast to run at well over 1000Hz, which is the update rate of the MPU6050 on the naze32. Control is performed according to Euler angle estimates, and to reduce the computational load of converting from quaternion to Euler angles (see Equation\eqref{eq:euler_from_quat}), lookup-table approximations of atan2 and asin are used. The Invensense MPU6050 has a 16-bit ADC and an accelerometer and gyro onboard. The accelerometer, when scaled to \pm4g, has a resolution of 0.002394 m/s^2. The lookup-table method used to approximate atan2 and asin in the actual implementation is accurate to \pm0.001 rad. Given the accuracy of the accelerometer, use of this lookup-table implementation is justified. The C-code implementation of the estimator can be found in the file src/estimator.c.
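To make the update step concrete, here is a minimal Python sketch that follows the equations above (quaternion error, correction term, composite rate, Euler integration, re-normalization). It is illustrative only: the function and variable names are not those of the C implementation in src/estimator.c, the gains are placeholders, inputs are assumed to be NumPy arrays, and the quadratic gyro approximation and matrix-exponential propagation described above are omitted.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def quat_inv(q):
    """Inverse of a unit quaternion (negate the vector part)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def pure(v):
    """p(v): promote a 3-vector to a pure quaternion."""
    return np.array([0.0, v[0], v[1], v[2]])

def passive_filter_step(q_hat, b_hat, w_bar, q_meas, dt, k_P=0.5, k_I=0.05):
    """One passive-filter update, following the equations in this section.

    q_hat  : current attitude estimate (unit quaternion, numpy array [w, x, y, z])
    b_hat  : current gyro bias estimate (3-vector)
    w_bar  : measured angular rates (3-vector, rad/s)
    q_meas : attitude measurement from another source (e.g. the accelerometer)
    """
    # Attitude error between the measurement and the estimate, then w_err = 2 * s~ * v~
    q_tilde = quat_mul(quat_inv(q_meas), q_hat)
    w_err = 2.0 * q_tilde[0] * q_tilde[1:]

    # Composite angular rate (eq. 2) and bias dynamics
    w_final = w_bar - b_hat + k_P * w_err
    b_hat = b_hat - k_I * w_err * dt

    # Quaternion kinematics, integrated with a simple Euler step, then re-normalized
    q_dot = 0.5 * quat_mul(q_hat, pure(w_final))
    q_hat = q_hat + q_dot * dt
    return q_hat / np.linalg.norm(q_hat), b_hat
```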
https://docs.rosflight.org/algorithms/estimator/
Templates

Splynx has a set of templates that are used in different parts of the system: email messages, invoice PDFs, SMS messages, document templates, etc.

Categories of templates

- Customer portal – messages sent to the customer portal
- Invoice PDF – Invoices (customer billing)
- Mail – messages sent to email
- SMS – messages sent to SMS
- Documents – Documents (located on customer)
- Cards – Generation cards (prepaid, refill)
- Payment calendars – payment calendars (customer billing)
- Payment receipts – receipts for payments (customer billing)
- Request PDF – proforma invoices – requests (customer billing)
- Reminder mail – reminder mail notification (customer billing)
- Reminder SMS – reminder SMS notification (customer billing)
- Finance exports – export of invoices, requests and payments in the Finance part

Add a new template

How to add a new template:

Edit

When you edit or change a template, Splynx displays an HTML editor:

You can always delete a template:

Values for templates

We recommend creating a test template in customer documents and entering the following dump commands to get the list of variables for the section of Splynx that you need to use. For example, let's create a test template and dump all variables of a customer. We then get the list of variables:

["id"]=> string(2) "50" ["billing_type"]=> string(7) "prepaid" ["partner_id"]=> string(1) "4" ["location_id"]=> string(1) "1" ["added_by"]=> string(5) "admin" ["added_by_id"]=> string(1) "1" ["login"]=> string(6) "000050" ["category"]=> string(6) "person"

For example, if we need to use the value login of a customer inside an invoice PDF, just type:

some HTML code {{ login }} continuing HTML code

Please find the list of most used variables here - Variables for templates. The dump commands for different Splynx sections are listed below:

Basic system values: {{ dump(loader.values) }}
Customer's information: {{ dump(loader.customer) }}
General information: {{ dump(loader.info) }}
Customer's services: {{ dump(loader.services) }}
Get all Internet services that are Active: {{ dump(loader.getServicesByTypeAndStatus('internet', 'active')) }}
Billing information: {{ dump(loader.billing) }}
Partner: {{ dump(loader.partner) }}
Transactions: {{ dump(loader.transactions) }}
Invoices variables: {{ dump(loader.invoices) }}
Invoice items: {% for invoice in loader.getInvoices() %} Invoice {{ invoice.number }} items: {{ dump(invoice.items) }} {% endfor %}
Pro-formas: {{ dump(loader.requests) }}
Payments: {{ dump(loader.payments) }}

Attached documents:
Invoices: {{ dump(loader.getAttachedInvoices) }}
Pro-formas: {{ dump(loader.getAttachedRequests) }}
Payment receipts: {{ dump(loader.getAttachedReceipts) }}

Example of usage:

{% set attached_invoices = loader.getAttachedInvoices %} {% for current in attached_invoices %} Invoice number: {{ current.number }} - sum: {{ current.total }} {% endfor %}

Twig (engine)

In all templates we use the Twig engine; please find the documentation.
https://docs.splynx.com/configuration/system/templates/templates.md
Viewing a Report Graphically

Automation Anywhere provides a step-by-step graphical view of all tasks run. To turn on the Visualize view, follow these steps:

- Click on the Tools menu, and select Options.
- Click on Advanced Settings.
- Check the Capture Screenshots While Recording a Task check box.
- Click Apply.
- Click OK.
- To view your task graphically, simply click the bar representing the task using the Task Run report view. If SnapPoints are not supported by the task, a message is displayed to notify you immediately.

Using the Visualize Report, you can:

- View the number of days that the task has run during a specific date range.
- View a specific day and the number of times that a task has run during that day.
- Compare your tasks using all of the saved SnapPoints in the task folders (..My Documents\Automation Anywhere\SnapPoints).
https://docs.automationanywhere.com/es-ES/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/using-special-features/viewing-a-report-graphically.html
Accounting

- Introduction
- Importing an accounting plan
- Accounting accounts
- Journals
- Current operations
- Create Various Operations
- Create a writing template
- Generate a writing from a template
- Set up tax positions according to transactions
- Reverse a writing (with assistant)
- Export accounting entries
- Enable automatic lettering
- Perform manual lettering
- Unlettering entries
- Perform the accounting cut-off: NPF and FEF
- Manage fog mode
- Manage overpayment clearances by the client
- Create VAT rates
- Manage VAT rates for intra-EU orders
- Manage the reverse VAT validation
- Accounting reports and exports
- The recovery
- Factoring
- The budgets
- Cost accounting
- Settlements
- Capital asset management
- Other functionalities

Introduction

The accounting application automatically generates accounting entries from accounting documents and pre-configured templates, and allows you to record various transactions in a manually assisted manner. The accounting module allows you to manage analytical postings, and to benefit natively from management of analytical distribution by third party, by product, etc. Real-time accounting reports are at your disposal in a few clicks: general ledger, balance, discount slip, journals, VAT… The management of accounting periods can be configured per company code, as well as the management of journals and the chart of accounts. In addition to automatic tax management, you can also benefit from customized tax position settings based on transactions. The application also manages recovery and factoring. From a banking point of view, the bank reconciliation between the lines of your bank statements and your accounting entries is automatic. The processing of SEPA direct debits and credit transfers is fully automated, and includes the processing of bank or cheque rejections. Axelor manages the EBICS TS banking communication protocol.

Writings: Postings: Displays the different accounting postings. Writing lines: Displays the different writing lines. Template entries: Allows you to generate entries from a template.

Payments: Payment wizard: Allows you to search by third party and payment method for invoices and payment plans. Supplier invoices to pay: Displays the supplier invoices to pay.

Periodic processing: Reconciliations: Displays the different reconciliations and allows you to create new ones. Lettering: Allows you to letter entry lines manually. Clearing of overpayments: Allows you to manage overpayments. Passage to irrecoverable: Allows you to retrieve invoices and due dates to be written off as irrecoverable. Cheque rejects: Validates cheque rejects and displays previous rejects. Declaration of exchanges: Allows you to carry out the DEB (Declaration of Exchanges of Goods) or the DES (Declaration of Exchanges of Services).

Recovery: Recovery: Allows you to carry out a recovery process. Recovery history: Displays the history of customer reminders and allows you to create new reminders.

Factoring: Subrogation receipts: Management of subrogation receipts / Notifications: Factor notifications when payment is received.

Configuration: Recovery methods: Allows you to create dunning methods with different levels. The methods can be configured (message, standard deadline, minimum amount…) / Recovery levels: Allows you to create different dunning levels with a label / Factor: Allows you to create factors.

Bank statement: Bank statement: Allows you to import bank statements and display bank statement lines.
Exports / Accounting reports: Refunds: Creating refunds. Cheque remittance slips: Creating cheque remittance slips. Accounting reports: Creation of accounting reports. Accounting exports: Creation of accounting exports.

Budget preparation: My budgets: Displays the budgets for which the active user is responsible and allows you to create new ones. All budgets: Displays all budgets and allows you to create new ones.

Bank orders: Transfers: SEPA Credit Transfers: Making SEPA credit transfers / International Credit Transfers: Making international credit transfers. Bank to bank transfers: Domestic cash transfers / International cash transfers. Sending a bank order: Sending bank orders. Bank orders awaiting signature: Displays bank orders awaiting signature.

Bank reconciliations: Bank reconciliations: Perform bank reconciliations. Bank reconciliation lines: Displays the bank reconciliation lines.

Accounting: Displays accounting dashboards.

Configuration: Financial: Tax positions / Tax years / Accounting periods / Logs: Display accounting logs / Log types: Allows you to create accounting log types / Accounting accounts: Displays accounting accounts / Accounting plan: Displays the chart of accounts / Accounting account settings / Accounting account types / Taxes: Displays the different taxes and allows you to create new ones and configure the rates and periods of application.

Analytics: Analytical journals: Displays analytical journals / Analytical journal types: Displays analytical journal types / Analytical accounts: Allows you to create analytical accounts / Analytical chart of accounts: Displays the analytical chart of accounts / Analytical axes: Allows you to create analytical axes / Analytical writing lines: Displays analytical writing lines / Analytical distribution models: Allows you to create analytical distribution models.

Payment: Payment methods: Displays the different payment methods and allows you to create new ones / Paybox: Access to Paybox / Payment terms: Displays the different payment terms and allows you to create new ones.

Reports/Exports: Interbank codes: Creation of interbank codes.

Writing templates: Writing template: Allows you to create accounting writing templates / Writing template types: Allows you to create accounting writing template types.

Bank order: File formats of bank orders / Economic reasons (bank order).

Importing an accounting plan

The chart of accounts is pre-filled, and it is advisable to import it by following this procedure. If you wish to modify it before re-importing it, you can do so from the application configurations by company in Accounting: select the France account plan, then click on the pencil icon. Then download the Zip file from the window that opened and modify it as you wish. You can then re-import it from the same place.

Accounting accounts

Define a name and a code (the specific code of the customer account). Fill in the account type, and indicate the parent account (e.g. the general customer code). You can also check "letterable" (to reconcile accounting entries) when it comes to third-party accounts. If you tick the box "Usable for third party balances", the entries related to this accounting account will be used to calculate the balance that appears on a third party's record.

Setting up a company code in accounting terms

First method: Click "Edit" to create a new line. You have the possibility to add a third party account and a supplier account.

Second method: You must double-click on the company.
You can then define the accounting accounts, by default for third parties, which will be filled in automatically each time a new third party is created.

Journals

Configuring log types

You can set up log types, in the module configurations, allowing you to perform filtered searches or cost accounting. Define a name, a code and select a type. Choose a sequence and an automatic label that will be included in the description field of each posting line related to the journal concerned. If you check the "Add part number to description" box, this number will automatically be appended to the description. There are also options for payments: you can edit a receipt or authorize payments in excess of the amounts due. It is possible to determine the account types and accounting accounts that will be selectable according to the associated journal.

Current operations

Create Various Operations

This involves creating accounting entries manually; they will be automatically assigned to the manual O.D. journal. Set a date, choose a journal, etc. You can then create posting lines (debit/credit) by entering various information such as the accounting account, amount and third party.

Create a writing template

This makes it possible to automatically reproduce recurring entries. You must first create a template type; you can choose between amount (when the amount is always the same) or percentage (this then indicates a distribution of the amount of the entry over the different accounts). Then, in the template, enter a name, a type and a journal. Then create the lines that will make up your writings. For statistical purposes you can associate the posting lines with an analytical distribution model and/or a product.

Generate a writing from a template

Select a type and model, then choose a movement date. Then generate the entries using the corresponding button.

Set up tax positions according to transactions

It is sometimes necessary to manage the equivalence of taxes, especially in the case of export sales. It is mandatory to enter a name and a code. Then select the accounts and/or taxes you wish to replace and their equivalents. The specific mention will be included in the customer file when you assign this tax position to it; this mention can be modified on each customer file.

Reverse a writing (with assistant)

To reverse a validated entry, click on the "Revert" button in the taskbar. The reversal is made on the current date and not on the date of the document for a posting made in fog mode.

Export accounting entries

Choose a print format, fiscal year and possibly a period. All the lines matching the requested criteria are displayed when you search for them, and you can export them.

Enable automatic lettering

You can check one or both boxes depending on whether you want to perform automatic lettering against invoices, payments or both. If one or both boxes are checked, the lettering will be done automatically at the part validation stage.

Perform manual lettering

Manual lettering can be partial or final. From the list of entries view, select the lines you want to letter and click on the "Read parts" button. If the lettering is final, a capital letter will appear on the lines; if the lettering is partial, it will appear in lower case.

Unlettering entries

In order to unletter an entry, you must open it and go to the "List of credit reconciliations" tab. Double-click in the line and then click on the "Disconnect" button in the right panel.
Perform the accounting cut-off: NPF and FEF

The accounting cut-off corresponds to the closing of a financial year. In the software, it is an automatic processing to be configured. Click on the "+" to create a new batch and select "Accounting cut-off" as the action. You can then choose between two types: NPF (invoices not received) or FEA (invoices to be issued).

Manage fog mode

The fog lists all accounting entries awaiting validation. Activate the fog mode by checking the corresponding button.

Manage overpayment clearances by the client

You can manage overpayment settlements according to minimum threshold rules. Select a maximum amount to be cleared and set the deadline for clearing overpayments. Then click on the "Recover overpayments" button.

Create VAT rates

The rate must be given a code and a name, and the tax rate must be filled in with an application start date (do not fill in an application end date at creation). The type of tax (debit or collection) will be used for the VAT return.

Manage VAT rates for intra-EU orders

There are 3 different cases to consider:

A customer order for a third party from the EU (outside France): In this case, an exempt VAT will be applied instead of the VAT defined on the product. On the invoice, we will see an exempt VAT line (amount = 0), and no VAT accounting line. In addition, the specific mention of this exemption must also be given on the tax equivalence of the tax position. It must appear on the invoice printout.

A supplier order for a third party from the EU (outside France): In this case it is necessary to indicate on the tax equivalence of the tax position of the third party that it is a VAT reversal and to choose the intra-Community VAT. E.g.: VAT Dedicated (A) standard rate and VAT Due Intracom. (A) normal tx. On the invoice we will therefore have two tax lines, the second cancelling the first. We will have two accounting entry lines (Ded VAT on the debit side and Intra VAT on the credit side) for the same amount but on different accounts. Indeed, in the case of an intra-EU purchase/sale, it is the recipient country that declares the VAT. It is not invoiced by the seller. To know the amount of the flow, it is therefore necessary to have a double set of entries.

A customer order without VAT (case of an exemption granted by the DGFPDGE, for example): In this case, a tax position can also be used to manage the tax exemption, but it will be necessary to indicate on the tax position that a specific mention per customer will be managed, and to indicate the exemption authorization on the third party.

Accounting reports and exports

Edit accounting reports

Choose the printing format. If necessary, enter a specific period or start and end dates. You can also filter by accounting accounts and by third parties.

VAT declaration assistance

It is possible to generate 2 VAT reports: on debits and on receipts. In both cases, select a fiscal year. For VAT on incoming payments you can sort by accounting account and/or by third party.

Subledger / general ledger

Define a period. You can choose different journals and payment methods and sort by accounting accounts and/or third parties. You can choose between general printing or subtotals by date ("Print Information" tab).

Analytical balance

Define a period. You can choose different journals, analytical journals and payment methods. You can choose between general printing or subtotals by date ("Print Information" tab).

FEC file

The system can generate two types of FEC files.
One is for the Tax Administration, and can only be generated once per fiscal year; the other is for sending to third-party accounting software and can be exported several times. Define a period and click on "Search". Your file is then ready to be exported.

The recovery

Stop the recovery

It is possible to stop the collection for a customer in general, for a particular invoice, for a particular posting or for a particular debit due date.

Create a collection method for a particular category of third parties

Choose a name and code, and create your method by selecting the dunning levels. You can request manual validation of certain steps.

Create a payment plan

After selecting the "Monthly payment" type, select the third party concerned. The payment mode, the RIB and the currency are automatically filled in but can still be changed manually. Enter the amount due and the number of terms. By clicking on the "Create schedule lines" button, as many lines as terms are generated. The information can be modified.

Passing an invoice as a doubtful debt

At the level of the company's accounting configurations, in the "Doubtful Receivables" tab, you can set the terms of the change to doubtful receivables. This will automate the process and create entries based on the doubtful receivables account set up in the "Accounting" tab. Then start the accounting batch "Suspicious customers" in order to perform mass processing. For unitary processing, there is a "Passage to irrecoverable" button on the "Accounting" tab of an invoice. You will need to select a previously configured reason.

Factoring

Manage factors

A factor is a third party of type "Factor"; its record is created in the same way as for any other third party. In your company's accounting configurations, under the "Debt Collection" tab, select your factor.

Create and track a subrogation receipt

When you create a new receipt you have 2 options: you can directly retrieve invoices from factorized customers by clicking on the corresponding button, or add invoices manually. You can then post your receipt or cancel it. Accounting entries will be generated automatically based on the accounts you have previously configured. You can later change the status to "Receipt paid".

The budgets

Create a budget

The fields Name and Code are mandatory. You must then indicate over which period you want this budget to be active. Once this is done, budget lines must be created.

Monitor your budgets

You can follow the evolution of a budget by going to it. On each line, an amount is by default considered committed when the purchase order is in the status "Completed". An amount is considered realized when the invoice associated with the purchase order has been posted.

Cost accounting

Create analytical axes

An axis consists of a code and a name and must be associated with an analytical account.

Create analytical accounts

An analytical account consists of a code and a name, and must be associated with an analytical axis.

Create analytical journals

You will need to create analytical journals, which will be used to configure your analytical distribution models. This will allow you to specify in which journal the analytical entries will be generated. A journal consists of a name and a type (which can be created from the Analytical Journal Types menu).

Create analytical distribution models

Once you have given a name to the model, you must add distribution lines.
A line consists of an axis, an analytical account on which the allocation will be made, and the journal in which the postings will be generated. Once your analytical distribution models are created, you can assign them to third parties, or to products/product families.

Settlements

Manage manual incoming payments

Allocation of multi-invoice payments is done via the payment wizard or automatic lettering.

Manage settlement discrepancies

In the "Accounting" tab, choose a range of authorized payment differences (which will be applied in both the negative and the positive) and select the accounts dedicated to the payment difference. For payments affected by these differences, postings will be generated automatically.

Use the payment creation wizard

Here you have access to all purchase and sale invoices that are not yet settled.

Pay mass invoices manually

It is possible to pay invoices manually in bulk: directly on invoices, from the payment wizards, or from the list view, with or without generating a bank order. Once the bank order has been generated, see the bank orders feature for how to manage it.

Capital asset management

Configure asset categories and types

You must first create asset types: just enter a name and a code. Then you can create asset categories. You must give a name and select a type. You can ensure that postings generated from a purchase invoice for an asset are automatically validated by checking the corresponding button. In the "Accounting information" tab you must enter a journal, an expense account and a depreciation account.

Other functionalities

Import a bank statement

From the "Bank statement" menu entry you can manually import a statement file by selecting the file format, and manually initiate a bank reconciliation.

Manage customer receivables

If you activate the Manage customer receivables option in the application configurations of the Accounting module, you can manage the customer receivables level and set a maximum amount. A default maximum amount of accepted receivables for all customers can be set in the sales configurations by company code. The accepted outstanding amount can also be configured per customer in the accounting information of the customer master records. You will also be able to view the amount currently used by each customer.

Manage down payments in the form of an invoice

As long as this option is not activated, you will find, on your invoice, the button "Save a deposit", which allows you to save the payment of a deposit. If the option is enabled, to save a down payment you will first have to click on "Generate invoice" and then choose "Invoice down payment" as the operation choice.
https://docs.axelor.com/abs/5.0/functional/accounting.html
What is Conversation Manager?

What is a conversation?

To answer what Conversation Manager is, let's first define a conversation. A conversation can consist of:

- Any number of Interactions
- Any number of Channels
- Any Length of Time
- Related by Context:
  - Service
  - State
  - Task
  - Extended Data (anything relevant to the Conversation)

What is Conversation Manager?

Conversation Manager is a contact center solution that creates coherent customer communication in real-time customer engagement applications that span one or more channels such as web, mobile, chat, IVR and voice. In a nutshell, Conversation Manager helps you recognize moments when you can take action to improve the customer experience. Within Conversation Manager, Context Services helps you to recognize the moment and the Business Rules help you to take action.

What does Conversation Manager Include?

Conversation Manager consists of a flexible context data store, a business rules system, and visualization dashboards.

Context Services

Contextual awareness refers to knowing who the customer is, what they want, and where they are in this process. Context Services also comes with a tool to manage Service, State and Tasks.

Genesys Rules

Rules allow simple if-then actions such as, "IF we know that the customer is a frequent user of our self-service tracking, THEN we offer self-service tracking as the first option in the menu."

Journey Timeline

The Journey Timeline is a visual timeline representation of the customer journey map, depicting all the touch points of the customer for various services on different channels.

Journey Dashboard

The Journey Dashboard is a visual representation of key performance indicators, showing rules execution and journey metrics.
https://docs.genesys.com/Documentation/CM/latest/Overview/WhatIsConversationManager
Managing Users and Roles

Users and Roles determine who has access to what data from the API. Roles are defined by regular expressions that determine which hosts the user can see, and which policy outcomes are restricted.

Example: Listing Users

Request

curl --user admin:admin

Response

{ "meta": { "page": 1, "count": 2, "total": 2, "timestamp": 1350994249 }, "data": [ { "id": "calvin", "external": true, "roles": [ "Huguenots", "Marketing" ] }, { "id": "quinester", "name": "Willard Van Orman Quine", "email": "noreply@@aol.com", "external": false, "roles": [ "admin" ] } ] }

Example: Creating a New User

All users will be created in the internal user table. The API will never attempt to write to an external LDAP server.

Request

curl --user admin:admin -X PUT -d '{ "email": "[email protected]", "roles": [ "HR" ] }'

Response

201 Created

Example: Updating an Existing User

Both internal and external users may be updated. When updating an external user, the API essentially annotates metadata for the user; it will never write to LDAP. Consequently, passwords may only be updated for internal users. Users may only update their own records, as authenticated by their user credentials.

Request

curl --user admin:admin -X POST -d '{ "name": "Calvin" }'

Response

204 No Content

Example: Retrieving a User

It is possible to retrieve data on a single user instead of listing everything. The following query is similar to issuing GET /api/user?id=calvin, with the exception that the previous query accepts a regular expression for id.

Request

curl --user admin:admin

Response

{ "meta": { "page": 1, "count": 1, "total": 1, "timestamp": 1350994249 }, "data": [ { "id": "calvin", "name": "Calvin", "external": true, "roles": [ "Huguenots", "Marketing" ] } ] }

Example: Adding a User to a Role

Adding a user to a role is just an update operation on the user. The full role set is updated, so if you are only appending a role, you may want to fetch the user data first, append the role and then update. The same approach is used to remove a user from a role.

Request

curl --user admin:admin -X POST -d '{ "roles": [ "HR", "gcc-contrib" ] }'

Response

204 No Content

Example: Deleting a User

Users can only be deleted from the internal users table.

Request

curl --user admin:admin -X DELETE

Response

204 No Content
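The same operations can be driven from any HTTP client. The Python sketch below mirrors the "list users" and "add a user to a role" examples above. The hub host name is a placeholder (an assumption), and the exact single-user update path is assumed to follow the /api/user pattern shown on this page; only adapt it to your deployment.

```python
import requests

# Placeholder hub address (assumption); substitute your CFEngine Enterprise hub.
BASE = "https://hub.example.com/api"
AUTH = ("admin", "admin")

# List users (equivalent to the first curl example above).
users = requests.get(BASE + "/user", auth=AUTH).json()
for user in users["data"]:
    print(user["id"], user.get("roles", []))

# Add a user to a role: the full role set is replaced, so read the current record
# first, append the new role, then POST the whole list back (assumed path /user/calvin).
calvin = requests.get(BASE + "/user", params={"id": "calvin"}, auth=AUTH).json()["data"][0]
roles = sorted(set(calvin.get("roles", [])) | {"gcc-contrib"})
update = requests.post(BASE + "/user/calvin", json={"roles": roles}, auth=AUTH)
print(update.status_code)  # expect 204 No Content on success
```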
https://docs.cfengine.com/docs/3.18/examples-enterprise-api-examples-managing-users-and-roles.html
# upgrade

# Abstract

x/upgrade is an implementation of a Cosmos SDK module that facilitates smoothly upgrading a live Cosmos chain to a new (breaking) software version. It accomplishes this by providing a BeginBlocker hook that prevents the blockchain state machine from proceeding once a pre-defined upgrade block height has been reached. The module does not prescribe anything regarding how governance decides to do an upgrade, but just provides the mechanism for coordinating the upgrade safely. Without software support for upgrades, upgrading a live chain is risky because all of the validators need to pause their state machines at exactly the same point in the process. If this is not done correctly, there can be state inconsistencies which are hard to recover from.
https://docs.cosmos.network/master/modules/upgrade/
Creating and Editing Templates

Rule Templates are created as Projects in the Template Projects node on the Template Development tab of GRAT.

Create a new Template Project

- Enter a name for the new template project. Template names must be unique within a tenant. Click Next.
- Select the type of template you are creating from the drop-down list, or enter the name of a new template type to create. To create an iWD template, select iWD.
- Click Finish. The new template project will now appear.

Editing and Configuring Rule Templates

Once it is created, the rule template appears in the left navigation pane. Expanding the template displays a list of components that can be configured. Double-click the component type to open the appropriate Editor and begin configuring components.

Renaming Rule Templates

To rename a rule template, just edit its name. A copy of the previously named template remains in the repository.

Important: Duplicate template names are not allowed within tenants, but are allowed in different tenants. Creating such a duplicate name will rename the project, but the name as published in GRAT is set via Project/Properties/Template Properties.

Deleting Rule Templates

Rule templates can be deleted provided that:

- The user has rule template delete permissions, and
- The rule template is not used in any rule package.
https://docs.genesys.com/Documentation/GRS/latest/GRATHelp/GRDTCreateEditTemplate
Diagnosing .NET Core ThreadPool Starvation with PerfView (Why my service is not saturating all cores or seems to stall)

What these general symptoms tell you is that you have a 'bottleneck', but it is not CPU. Basically there is some other resource that is needed to service a request, that each request has to wait for, and it is this waiting that is limiting throughput. Now the most common reason for this is that the requests require resources on another machine (typically a database, but it could be an off-machine cache (Redis), or any other 'out of process' resource). As we will see, these common issues are relatively straightforward because we can 'blame' the appropriate component. When we look at a breakdown of why a request takes so long, we will clearly see that some part (e.g. waiting on a response from a database) takes a long time, so we know that that is the issue. However there is an insidious case, which is that the 'scarce resource' is threads themselves. That is, some operation (like a database query) completes, but when it does there is no thread that can run the next step in servicing the request, so it simply stalls, waiting for such a thread to become available. This time shows up as a 'longer' database query, but it also happens any time non-CPU activity happens (e.g. any I/O or delay), so it seems like every I/O operation randomly takes longer than it should. We call this problem ThreadPool starvation, and it is the focus of this article. In theory threadpool starvation was always a potential problem, but as will be explained, before the advent of high-scale, asynchronous services the number of threads needed was more predictable, so the problem was rare. With more high-scale async apps, however, the potential for this problem has grown, so I am writing this article to describe how to diagnose whether this problem is affecting your service and what to do about it if it is. But before we can do that, some background is helpful.

What is a ThreadPool?

Before we talk about a threadpool, we should step back and describe what a Thread is. A Thread is the state needed to execute a sequential program. To a good approximation it is the 'call-stack' of partially executed methods, including all the local variables for each of those methods. The key point is that all code needs a thread to 'run in'. When a program starts it is given a thread, and multi-threaded programs create additional threads which each execute code concurrently with each other. Threads make sense in a world where the amount of concurrency is modest and each thread is doing a complex operation. However some workloads have exactly the opposite characteristics: there are MANY concurrent things happening, and each of them is doing simple things. Since all execution needs a thread, for this workload it makes sense to reuse the thread, having it execute many small (unrelated) work items. This is a threadpool. It has a very simple API. In .NET it is ThreadPool.QueueUserWorkItem, which takes a delegate (method) to run. When you call QueueUserWorkItem, the threadpool promises to run the delegate you passed some time in the future. (Note: .NET does not encourage direct use of the thread pool. Task.Factory.StartNew(Action) also queues methods to the .NET threadpool but makes it much easier to deal with error conditions and waiting on the result.)

What is Asynchronous Programming?

In the past, services used a 'multi-threaded' model of execution, where the service would create a thread for each concurrent request being handled.
Each such thread would do all the work for that particular request from beginning to end, then move on to the next. This works well for low- to medium-scale services, but because a thread is a relatively expensive item (generally you want less than 1000 of them, and preferably < 100), this multi-threaded model does not work well if you want your service to be able to handle 1000s or 10000s of requests concurrently. To handle such scale, you need an asynchronous style of programming, and ASP.NET Core is built with this architecture. In this model, instead of having a 'thread per concurrent request', you instead register callbacks when you do long operations (typically I/O), and reuse the thread to do other processing while you are waiting. This architecture means that just a handful of threads (ideally about the number of physical processors) can handle a very large number (1000s or more) of concurrent requests. You can also see why asynchronous programming needs a threadpool: as I/Os complete, you need to run the next 'chunk' of code, so you need a thread. Asynchronous code uses the threadpool to get a thread to run this small chunk of code, and then returns the thread to the pool so it can run some other (unrelated) chunk.

Why do High-scale Asynchronous Services have Problems with ThreadPool Starvation?

When your whole service uniformly uses the asynchronous style of programming, then your service scales VERY well. Threads NEVER block when running service code, because if an operation takes a while, then the code should have called a version that takes a callback (or returns a System.Threading.Tasks.Task) and causes the rest of the code to run when the I/O completes (C# has a magic 'await' statement that looks like it blocks, but in fact schedules a callback to run the next statement when the awaited operation completes). Thus threads never block except in the threadpool waiting for more work, so only a modest number of threads are needed (basically the number of CPUs on the machine). Unfortunately, when your service is running at high scale, it does not take much to ruin your perf. For example, imagine you had 1000 concurrent requests being processed by a 16-processor machine. Using async, the threadpool only needs 16 threads to service those 1000 requests. For the sake of simplicity, assume that the machine has 16 CPUs and the threadpool has only one thread per CPU (thus 16). Now let's imagine that someone places a Sleep(200) at the start of the request processing. Ideally you would like to believe that this would only cause each request to be delayed by 200 msec and thus have a response time of 400 msec. However the threadpool only has 16 threads (which was enough when everything was async), which, all of a sudden, is not enough. Only the first 16 requests run, and for 200 msec those threads are just sleeping. Only after 200 msec are those 16 threads available again, so another 16 requests can go, etc. Thus the first requests are delayed by 200 msec, the second set by 400 msec, the third by 600 msec. The AVERAGE response time goes up to over 6 SECONDS for a load of 1000 simultaneous requests. Now you can see why in some cases throughput can go from fine to worse than terrible with just a very small amount of blocking. This example is contrived, but it illustrates the main point: if you add ANY blocking, the number of threads needed in the threadpool jumps DRAMATICALLY (from 16 to 1000).
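If you want to check the arithmetic behind that example, here is a small back-of-the-envelope sketch. The numbers are exactly the ones assumed above (1000 simultaneous requests, 16 threads, a 200 msec block per request, and no new threads injected); it simply sums up the queueing delay of each "wave" of 16 requests.

```python
import math

requests = 1000      # simultaneous requests
threads = 16         # threadpool threads (one per CPU)
block_ms = 200       # blocking time added to each request

# With only 16 threads, requests are served in "waves" of 16:
# wave 1 finishes its block at 200 ms, wave 2 at 400 ms, and so on.
waves = math.ceil(requests / threads)                       # 63 waves
delays = [block_ms * (wave + 1) for wave in range(waves)]   # delay of each wave

average_delay_ms = sum(delay * min(threads, requests - wave * threads)
                       for wave, delay in enumerate(delays)) / requests
print(f"waves: {waves}, worst delay: {delays[-1]} ms, "
      f"average delay: {average_delay_ms / 1000:.2f} s")
# -> about 6.35 s of average added delay, matching the "over 6 seconds" above
```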
The .NET Threadpool DOES try to inject more threads, but it does so at a modest rate (e.g. 1 or 2 a second), so it makes little difference in the short term (you need many minutes to get up to 1000). Moreover, you have lost the benefit of being async (because now you need roughly a thread per request, which is what you were trying to avoid). So you can see there is a 'cliff' with asynchronous code. You only get the benefits of asynchronous code if you 'almost never block'. If you DO block, and you are running at high scale, you are very likely to quickly exhaust the threadpool's threads. The threadpool will try to compensate, but it will take a while, and frankly you have lost the benefits of being async. The correct solution is to avoid blocking on the 'hot' paths in a high-scale service.

What typically causes blocking?

The common causes of blocking include:

- Calling any API that does I/O (and thus may block) but is not async (since it is not an async API, it HAS to block if the I/O can't complete quickly).
- Calling Task.Wait() or Task.GetAwaiter().GetResult() (which blocks like Task.Wait()). Using these APIs is a red flag. Ideally you are in an async method and you use 'await' instead.

So in a nutshell, this is why threadpool starvation is becoming more common. High-scale async applications are becoming more common, and it is very easy to introduce blocking into your service code, which forces you off the 'golden path' for async, requiring many threadpool threads, which causes starvation.

How do I know that the ThreadPool is starved for threads?

So how do you know this bad thing is happening? You start with the symptoms above, namely that your CPUs are not as saturated as you would want. From here I will show you a number of symptoms that you can check that give an ever more definitive answer to the question. The details do change depending on the operating system. I will be showing the Windows case, but I will also describe what to do on Linux. As always, whenever you have a problem with a service, it is useful to get a detailed performance trace. On Windows this means downloading PerfView and taking a PerfView trace:

PerfView /threadTime collect

On Linux it currently means taking a trace with perfCollect; however, in release 2.2 of .NET Core you will be able to collect traces using a 'dotnet profile' command (details are TBD, I will update when that happens). If you are using Application Insights, you can use the Application Insights profiler to capture a trace as well. You only need 60 seconds of trace when the service is under load but not performing well.

Look for a growing thread count

A key symptom of threadpool starvation is the fact that the threadpool DOES detect that it is starved (there is work but no threads), and it is trying to fix it by injecting more threads, but it does so (by design) at a slow rate (about 1-2 times a second). Thus, in PerfView's 'events' view (on Windows), you will see the OS kernel events for new threads showing up at that rate. Notice that it adds about 2 threads a second. Linux traces don't include OS events in the 'events' view, but there are .NET Runtime events that will tell you every time a thread is created.
You should look at the following events:

- Microsoft-Windows-DotNETRuntime/IOThreadCreation/Start - Windows only (there is a special queue for threads blocked on certain I/O), for new I/O workers
- Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThread/Start - logged on both Linux and Windows for new workers
- Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThreadAdjustment/Adjustment - indicates normal worker adjustment (will show an increasing count)

Here is an example of what you might see on Linux. This will continue indefinitely if the load is high enough (and thus the need for more threads is high enough). It may also show up in OS performance metrics that count threads in a given process (so you can do it without PerfView at all if necessary).

Finding the blocking API

If you have established that you do have a threadpool starvation problem, as mentioned the likely issue is that you called a blocking API that is consuming a thread for too long. On Windows, PerfView can show you where you are blocking with the 'Thread Time (with StartStop Activities)' view. This view should show you all the service requests in the 'Activities' node, and if you look at these, you should see a pattern of 'BLOCKED_TIME' being consumed. The API that causes that blocking is the issue. Unfortunately this view is not available on Linux today using perfCollect. However the Application Insights profiler should work and show you equivalent information. In version 2.2 of the runtime the 'dotnet profile' command should also work (TBD) for ad-hoc collection.

Be Proactive

Note that you don't need to wait for your service to melt down to find 'bad blocking' in your code. There is nothing stopping you from running the PerfView collection above on your development box under just about ANY load and simply looking for blocking. That blocking should be fixed. You don't need to actually induce a high-scale environment and see the threadpool starvation; you know that it will happen if you have large scale (1000s of concurrent requests) and you block for any length of time (even 10 msec is too much if it is happening on every request). Thus you can be proactive and solve the problem before it is even a problem.

Work-around: Force more threads in the ThreadPool

As mentioned already, the real solution to threadpool starvation is to remove the blocking that is consuming threads. However you may not be able to modify the code to do this easily, and need SOMETHING that will help in the very short term. The ThreadPool.SetMinThreads API can set the minimum number of threads for the ThreadPool (on Windows there is a pool for I/O threads and a pool for all other work, and you have to see from your trace which kind of thread is being created constantly to know which to set). The normal worker-thread minimum can also be set using the environment variable COMPlus_ForceMinWorkerThreads, but there is no environment variable for the I/O threadpool (which only exists on Windows). Generally this is a bad solution because you may need MANY threads (e.g. 1000 or more), and that is inefficient. It should only be used as a stop-gap measure.

Summary

So now you know the basics of .NET ThreadPool starvation.

- It is something to look for when your service is not performing well and CPU is not saturated.
- The main symptom is a constantly increasing number of threads (as the threadpool tries to fix the starvation).
- You can determine things more definitively by looking at .NET runtime events that show the threadpool adding threads.
- You can then use the normal 'thread time' view to find out what is blocking during a request.
- You can be proactive and look for blocked time BEFORE at-scale deployment, and head off these kinds of scalability issues.
- Removing the blocking is best, but if that is not possible, increasing the size of the ThreadPool will at least make the service function in the short term.

There you have it. Sadly this has become more common than we would like. We are likely to add better diagnostic capabilities to make this easier to find, but this blog entry helps in the meantime.
https://docs.microsoft.com/en-us/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall
Use it as close as possible to the posix function call. You can also rethrow a StandardException, which is useful if you want to track the source of the exception as it goes back up the program hierarchy. This is done by passing the old exception to the constructor call of a new throw. When NDEBUG is defined, the throw will automatically disappear from the program code. Catching can be done with the usual try-catch blocks, but if you want to pre-process this code away as well, you can use the debug_try and debug_catch macros. In addition, you may use the assert_throw macro (a mechanism similar to the run_time_assert function in ecl_errors).
http://docs.ros.org/en/noetic/api/ecl_exceptions/html/errorsExceptions.html
This article introduces the billing policy for the whiteboard service provided by Agora. Billing for the whiteboard service begins once you enable and implement the service in your project. Agora sends your bill and deducts the payment from your account on a monthly basis. For details, see Billing, fee deduction, and account suspension.

At the end of each month, Agora adds up the usage of each whiteboard feature in all projects under your Agora account and subtracts your monthly free usage allowances. Agora multiplies each resulting usage number by the corresponding price and adds up the cost of each feature to get the total cost for that month. Agora's whiteboard service provides two billable features: the online whiteboard and file conversion.

Cost of each whiteboard feature = (monthly total usage - free-of-charge usage) × unit price

Total cost = online whiteboard cost + file conversion cost

The unit price for each whiteboard feature is as follows:

This section describes how to calculate the usage of each whiteboard feature. The usage duration of each whiteboard room equals the total sum of the usage duration of all users in the room. For each user, Agora calculates the usage duration from the user joining a room to the user leaving the room. The usage duration is calculated in minutes. For file conversion, Agora calculates the usage amount by the number of images and web pages successfully converted from source files. Agora gives each whiteboard feature the following free-of-charge usage each month:

You can check your usage of the whiteboard service on Agora Console. Perform the following steps: Log in to Agora Console, and click the Products & Usage button on the left navigation panel. Click the arrowhead in the top left corner, and select the project you want to check in the drop-down box. Click Duration under Whiteboard, select a time frame, and check the usage duration.

This section shows how to calculate your monthly usage of the whiteboard service, as well as the total cost based on the corresponding unit price. User A joins a whiteboard room to give an online lecture and successfully converts a 50-page PPTX file to HTML files using the file conversion feature. Another 200 users join the room to watch the lecture. The lecture lasts 60 minutes. When the lecture ends, all users leave the room at the same time. The usage calculation is as follows: 201 users (User A plus the 200 viewers) each spend 60 minutes in the room, so the online whiteboard usage is 201 × 60 = 12,060 minutes, and the file conversion usage is 50 converted pages. The following table shows the calculation of the total cost of the lecture:

Call disconnect() to cut off a user's connection when the user leaves the room, and ensure that you receive the onPhaseChanged(disconnected) callback.
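As a rough illustration of the formulas above, the sketch below recomputes the lecture example in Python. The unit prices and free-of-charge allowances are placeholder assumptions (the actual price table is not reproduced on this page); substitute the values from Agora's published pricing before using this for real estimates.

```python
# Billing sketch for the lecture example above.
# NOTE: prices and free allowances below are made-up placeholders, not Agora's actual pricing.

PRICE_PER_1000_MIN = 1.40      # placeholder: online whiteboard, per 1,000 minutes
PRICE_PER_1000_PAGES = 0.50    # placeholder: file conversion, per 1,000 converted pages
FREE_MINUTES = 10_000          # placeholder monthly free-of-charge allowance
FREE_PAGES = 1_000             # placeholder monthly free-of-charge allowance

def feature_cost(total_usage, free_usage, unit_price_per_1000):
    """Cost of one feature = (monthly total usage - free-of-charge usage) x unit price."""
    billable = max(total_usage - free_usage, 0)
    return billable / 1000 * unit_price_per_1000

# Lecture example: 201 users (User A + 200 viewers) x 60 minutes, 50 pages converted.
whiteboard_minutes = 201 * 60          # 12,060 minutes
converted_pages = 50

total = (feature_cost(whiteboard_minutes, FREE_MINUTES, PRICE_PER_1000_MIN)
         + feature_cost(converted_pages, FREE_PAGES, PRICE_PER_1000_PAGES))
print(f"online whiteboard minutes: {whiteboard_minutes}, total cost: ${total:.2f}")
```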
https://docs.agora.io/en/whiteboard/billing_whiteboard?platform=Android
Overview

While you can configure and edit parts of HPE Consumption Analytics platform in different sequences, HPE recommends that you follow the sequence listed below. Each step in the sequence includes a link to the article containing detailed information about that configuration step.

- Create one or more collections. A collection defines where HPE Consumption Analytics platform should collect usage data, and provides the credentials for accessing that data. For more information, see Collections.
- Create SmartTag rules that define how your collected data is transformed and augmented before it is used in reports. For example, you can supplement your collected cost and usage data with business information. For more information, see SmartTag Rules.
- Create views that provide data in a concise dashboard of charts and reports, tailored for the specific needs of your users. For more information, see Views.
- Create reports that allow you and others to view your cloud data. For more information, see Reports.
- Create insight rules that alert you about situations in your cloud that are either costing you money or are potentially harming your environment in some other way. For more information, see Insights.
- Create groups and user accounts for the people that need to view usage and cost data. For more information, see Permissions.

The following path (clickable map) appears at the top of each article in this process, to help you navigate. You can hover over each step indicator to view the article title in a tooltip, as shown above.
https://docs.consumption.support.hpe.com/CCS/HPE_Consumption_Analytics_Portal_User_Guides/Configuring_the_HPE_Consumption_Analytics_Portal/0010_Overview
Tutorial: Implement the data lake capture pattern to update a Databricks Delta table

This tutorial shows you how to handle events in a storage account that has a hierarchical namespace. You'll build a small solution that enables a user to populate a Databricks Delta table by uploading a comma-separated values (csv) file that describes a sales order. You'll build this solution by connecting together an Event Grid subscription, an Azure Function, and a Job in Azure Databricks. In this tutorial, you will:

- Create an Event Grid subscription that calls an Azure Function.
- Create an Azure Function that receives a notification from an event, and then runs the job in Azure Databricks.
- Create a Databricks job that inserts a customer order into a Databricks Delta table that is located in the storage account.

We'll build this solution in reverse order, starting with the Azure Databricks workspace.

Prerequisites

If you don't have an Azure subscription, create a free account before you begin. Create a storage account that has a hierarchical namespace (Azure Data Lake Storage Gen2). This tutorial uses a storage account named contosoorders. Make sure that your user account has the Storage Blob Data Contributor role assigned to it. See Create a storage account to use with Azure Data Lake Storage Gen2. Create a service principal, and save the appId, password, and tenant values into a text file. You'll need those values later.

Create a sales order

First, create a csv file that describes a sales order, and then upload that file to the storage account. Later, you'll use the data from this file to populate the first row in our Databricks Delta table.

Open Azure Storage Explorer. Then, navigate to your storage account, and in the Blob Containers section, create a new container named data. For more information about how to use Storage Explorer, see Use Azure Storage Explorer to manage data in an Azure Data Lake Storage Gen2 account.

In the data container, create a folder named input.

Paste the following text into a text editor:

InvoiceNo,StockCode,Description,Quantity,InvoiceDate,UnitPrice,CustomerID,Country
536365,85123A,WHITE HANGING HEART T-LIGHT HOLDER,6,12/1/2010 8:26,2.55,17850,United Kingdom

Save this file to your local computer and give it the name data.csv. In Storage Explorer, upload this file to the input folder.

Create a job in Azure Databricks

In this section, you'll perform these tasks:

- Create an Azure Databricks workspace.
- Create a notebook.
- Create and populate a Databricks Delta table.
- Add code that inserts rows into the Databricks Delta table.
- Create a Job.

Create an Azure Databricks workspace. The workspace creation takes a few minutes. To monitor the operation status, view the progress bar at the top.

Create a Spark cluster in Databricks

In the Azure portal, go to the Azure Databricks workspace that you created, and then select Launch Workspace. You are redirected to the Azure Databricks portal. From the portal, select New > Cluster. In the New cluster page, provide the values to create a cluster. Accept all default values other than the following:

- Enter a name for the cluster.
- Make sure you select the Terminate after 120 minutes of inactivity checkbox.

Create a notebook

In the left pane, select Workspace. From the Workspace drop-down, select Create > Notebook. In the Create Notebook dialog box, enter a name for the notebook. Select Python as the language, and then select the Spark cluster that you created earlier. Select Create.

Create and populate a Databricks Delta table

In the notebook that you created, copy and paste the following code block into the first cell, but don't run this code yet.
Replace the appId, password, and tenant placeholder values in this code block with the values that you collected while completing the prerequisites of this tutorial.

dbutils.widgets.text('source_file', "", "Source File")

spark.conf.set("fs.azure.account.auth.type", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id", "<appId>")
spark.conf.set("fs.azure.account.oauth2.client.secret", "<password>")
spark.conf.set("fs.azure.account.oauth2.client.endpoint", "https://login.microsoftonline.com/<tenant>/oauth2/token")

adlsPath = 'abfss://data@contosoorders.dfs.core.windows.net/'
inputPath = adlsPath + dbutils.widgets.get('source_file')
customerTablePath = adlsPath + 'delta-tables/customers'

This code creates a widget named source_file. Later, you'll create an Azure Function that calls this code and passes a file path to that widget. This code also authenticates your service principal with the storage account, and creates some variables that you'll use in other cells.

Note

In a production setting, consider storing your authentication key in Azure Databricks. Then, add a lookup key to your code block instead of the authentication key. For example, instead of using this line of code: spark.conf.set("fs.azure.account.oauth2.client.secret", "<password>"), you would use the following line of code: spark.conf.set("fs.azure.account.oauth2.client.secret", dbutils.secrets.get(scope = "<scope-name>", key = "<key-name-for-service-credential>")). After you've completed this tutorial, see the Azure Data Lake Storage Gen2 article on the Azure Databricks website to see examples of this approach.

Press the SHIFT + ENTER keys to run the code in this block.

Copy and paste the following code block into a different cell, and then press the SHIFT + ENTER keys to run the code in this block.

from pyspark.sql.types import StructType, StructField, DoubleType, IntegerType, StringType

inputSchema = StructType([
  StructField("InvoiceNo", IntegerType(), True),
  StructField("StockCode", StringType(), True),
  StructField("Description", StringType(), True),
  StructField("Quantity", IntegerType(), True),
  StructField("InvoiceDate", StringType(), True),
  StructField("UnitPrice", DoubleType(), True),
  StructField("CustomerID", IntegerType(), True),
  StructField("Country", StringType(), True)
])

rawDataDF = (spark.read
  .option("header", "true")
  .schema(inputSchema)
  .csv(adlsPath + 'input')
)

(rawDataDF.write
  .mode("overwrite")
  .format("delta")
  .saveAsTable("customer_data", path=customerTablePath))

This code creates the Databricks Delta table in your storage account, and then loads some initial data from the csv file that you uploaded earlier.

After this code block successfully runs, remove this code block from your notebook.

Add code that inserts rows into the Databricks Delta table

Copy and paste the following code block into a different cell, but don't run this cell.

upsertDataDF = (spark
  .read
  .option("header", "true")
  .csv(inputPath)
)
upsertDataDF.createOrReplaceTempView("customer_data_to_upsert")

This code inserts data into a temporary table view by using data from a csv file. The path to that csv file comes from the input widget that you created in an earlier step.

Add the following code to merge the contents of the temporary table view with the Databricks Delta table.
%sql
MERGE INTO customer_data cd
USING customer_data_to_upsert cu
ON cd.CustomerID = cu.CustomerID
WHEN MATCHED THEN
  UPDATE SET
    cd.StockCode = cu.StockCode,
    cd.Description = cu.Description,
    cd.InvoiceNo = cu.InvoiceNo,
    cd.Quantity = cu.Quantity,
    cd.InvoiceDate = cu.InvoiceDate,
    cd.UnitPrice = cu.UnitPrice,
    cd.Country = cu.Country
WHEN NOT MATCHED THEN
  INSERT (InvoiceNo, StockCode, Description, Quantity, InvoiceDate, UnitPrice, CustomerID, Country)
  VALUES (cu.InvoiceNo, cu.StockCode, cu.Description, cu.Quantity, cu.InvoiceDate, cu.UnitPrice, cu.CustomerID, cu.Country)

Create a Job

Create a Job that runs the notebook that you created earlier. Later, you'll create an Azure Function that runs this job when an event is raised.

Click Jobs. In the Jobs page, click Create Job. Give the job a name, and then choose the upsert-order-data workbook.

Create an Azure Function

Create an Azure Function that runs the Job.

In the upper corner of the Databricks workspace, choose the people icon, and then choose User settings. Click the Generate new token button, and then click the Generate button. Make sure to copy the token to a safe place. Your Azure Function needs this token to authenticate with Databricks so that it can run the Job.

Select the Create a resource button found on the upper left corner of the Azure portal, then select Compute > Function App. In the Create page of the Function App, make sure to select .NET Core for the runtime stack and to configure an Application Insights instance.

In the Overview page of the Function App, click Configuration. In the Application Settings page, choose the New application setting button to add each setting. Add the following settings: DBX_INSTANCE (the domain of your Azure Databricks workspace, without the https:// prefix), DBX_PAT (the personal access token that you generated earlier), and DBX_JOB_ID (the ID of the job that you created).

In the overview page of the function app, click the New function button. Choose Azure Event Grid Trigger. Install the Microsoft.Azure.WebJobs.Extensions.EventGrid extension if you're prompted to do so. If you have to install it, then you'll have to choose Azure Event Grid Trigger again to create the function. The New Function pane appears.

In the New Function pane, name the function UpsertOrder, and then click the Create button.
Replace the contents of the code file with this code, and then click the Save button:

#r "Microsoft.Azure.EventGrid"
#r "Newtonsoft.Json"

using Microsoft.Azure.EventGrid.Models;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

private static HttpClient httpClient = new HttpClient();

public static async Task Run(EventGridEvent eventGridEvent, ILogger log)
{
    log.LogInformation("Event Subject: " + eventGridEvent.Subject);
    log.LogInformation("Event Topic: " + eventGridEvent.Topic);
    log.LogInformation("Event Type: " + eventGridEvent.EventType);
    log.LogInformation(eventGridEvent.Data.ToString());

    if (eventGridEvent.EventType == "Microsoft.Storage.BlobCreated" || eventGridEvent.EventType == "Microsoft.Storage.FileRenamed")
    {
        var fileData = ((JObject)(eventGridEvent.Data)).ToObject<StorageBlobCreatedEventData>();
        if (fileData.Api == "FlushWithClose")
        {
            log.LogInformation("Triggering Databricks Job for file: " + fileData.Url);

            var fileUrl = new Uri(fileData.Url);
            var httpRequestMessage = new HttpRequestMessage
            {
                Method = HttpMethod.Post,
                RequestUri = new Uri(String.Format("https://{0}/api/2.0/jobs/run-now",
                    System.Environment.GetEnvironmentVariable("DBX_INSTANCE", EnvironmentVariableTarget.Process))),
                Headers =
                {
                    { System.Net.HttpRequestHeader.Authorization.ToString(), "Bearer " + System.Environment.GetEnvironmentVariable("DBX_PAT", EnvironmentVariableTarget.Process) },
                    { System.Net.HttpRequestHeader.ContentType.ToString(), "application/json" }
                },
                Content = new StringContent(JsonConvert.SerializeObject(new
                {
                    job_id = System.Environment.GetEnvironmentVariable("DBX_JOB_ID", EnvironmentVariableTarget.Process),
                    notebook_params = new
                    {
                        source_file = String.Join("", fileUrl.Segments.Skip(2))
                    }
                }))
            };

            var response = await httpClient.SendAsync(httpRequestMessage);
            response.EnsureSuccessStatusCode();
        }
    }
}

This code parses information about the storage event that was raised, and then creates a request message with the URL of the file that triggered the event. As part of the message, the function passes a value to the source_file widget that you created earlier. The function code sends the message to the Databricks Job and uses the token that you obtained earlier for authentication.

Create an Event Grid subscription

In this section, you'll create an Event Grid subscription that calls the Azure Function when files are uploaded to the storage account.

In the function code page, click the Add Event Grid subscription button. In the Create Event Subscription page, name the subscription, and then use the fields in the page to select your storage account. In the Filter to Event Types drop-down list, select the Blob Created and Blob Deleted events, and then click the Create button.

Test the Event Grid subscription

Create a file named customer-order.csv, paste the following information into that file, and save it to your local computer.

InvoiceNo,StockCode,Description,Quantity,InvoiceDate,UnitPrice,CustomerID,Country
536371,99999,EverGlow Single,228,1/1/2018 9:01,33.85,20993,Sierra Leone

In Storage Explorer, upload this file to the input folder of your storage account. Uploading a file raises the Microsoft.Storage.BlobCreated event. Event Grid notifies all subscribers to that event. In our case, the Azure Function is the only subscriber. The Azure Function parses the event parameters to determine which event occurred. It then passes the URL of the file to the Databricks Job. The Databricks Job reads the file, and adds a row to the Databricks Delta table that is located in your storage account.

To check if the job succeeded, open your Databricks workspace, click the Jobs button, and then open your job.
Select the job to open the job page. When the job completes, you'll see a completion status.

In a new workbook cell, run this query to see the updated Delta table.

%sql select * from customer_data

The returned table shows the latest record.

To update this record, create a file named customer-order-update.csv, paste the following information into that file, and save it to your local computer.

InvoiceNo,StockCode,Description,Quantity,InvoiceDate,UnitPrice,CustomerID,Country
536371,99999,EverGlow Single,22,1/1/2018 9:01,33.85,20993,Sierra Leone

This csv file is almost identical to the previous one except the quantity of the order is changed from 228 to 22.

In Storage Explorer, upload this file to the input folder of your storage account. Run the select query again to see the updated Delta table.

%sql select * from customer_data

The returned table shows the updated record. If it doesn't, see the troubleshooting tip after the clean-up section.

Clean up resources

When they're no longer needed, delete the resource group and all related resources. To do so, select the resource group for the storage account and select Delete.
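Troubleshooting tip: if the Delta table doesn't update after an upload, you can rule out the Event Grid and Function layers by triggering the Databricks job yourself with the same REST call that the Azure Function makes. The sketch below assumes the standard Databricks Jobs run-now endpoint; the instance domain, token, and job ID are placeholders that you must replace with your own values.

curl -X POST "https://<databricks-instance>/api/2.0/jobs/run-now" \
  -H "Authorization: Bearer <personal-access-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "job_id": <job-id>,
        "notebook_params": { "source_file": "input/customer-order.csv" }
      }'

If the call succeeds, the response should include a run identifier and the run appears in the Jobs page, exactly as it does when the Azure Function triggers the job.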
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-events?toc=/azure/storage/blobs/toc.json
Product-level privilege required by ONTAP tools for VMware vSphere

To access the ONTAP® tools for VMware vSphere GUI, you must have the product-level, VSC-specific View privilege assigned at the correct vSphere object level. If you log in without this privilege, VSC displays an error message when you click the NetApp icon and prevents you from accessing VSC. The following information describes the VSC product-level View privilege:
https://docs.netapp.com/us-en/ontap-tools-vmware-vsphere-98/concepts/reference_product_level_privilege_required_by_ontap_tools_for_vmware_vsphere.html
HR - Introduction - Dedicated mobile application - The employees - Time sheets - Overtime hours - Employee training - Internal employee evaluation - Leave of absence - Expense reports Introduction Introduction This application allows you to manage the company’s human resources. It consists of different sub-applications covering a broad spectrum of human resources. It allows you to manage: Employee management including employment contracts, payroll preparation, meal and bonus voucher management, leave requests, expense reports, timesheets, overtime, recruitment, training, evaluations. You can choose whether or not to enable the different sub-applications. Keywords : Payroll preparation: the application allows you to take charge of all payroll preparation upstream, gathering all the information necessary for its processing. This preparation must then be exported to certified payroll software (according to the legislation in force in the country). Time sheet: A time sheet allows employees to indicate the time they have spent on tasks and/or projects. This function is often used in project management. Employee Management: Human Resources Management Application : Employee List: Displays the list of employees and allows you to create employee records. Employment contracts: Allows you to create employment contracts for employees. Payroll Preparation: Menu to manage the preparation of employee payroll. Management of restaurant vouchers: Allows you to manage meal vouchers. Bonus management: Allows you to manage employee bonuses. Configuration: Activities: Allows you to create product-type activities (Project manager, project, audit…) that can be performed by employees and invoiced to customers / Reasons for termination of contract: Allows you to create reasons for termination of employment contract / Models of employment contract: Allows you to create employment contract templates / Types of bonuses : Allows you to create bonus types for employees / Employment contract types: Allows you to create employment contract types / Payroll years: Allows you to create payroll years / Payroll period: Allows you to create payroll periods / Tax positions: Displays tax positions / Schedules : Manages employee schedules (Event schedules: Allows you to create event schedules for employees (e. g. public holiday schedule) / Weekly schedule: Allows you to create weekly schedules for employees). Recruitment: Recruitment management application: Job opportunities: Allows you to create and manage job opportunities. Job History: Displays the history of all job offers. All current applications: Displays all current applications. Application: Displays all applications and allows you to create new ones. Configuration: Diploma level: Allows you to create diploma levels / Recruitment step: Allows you to create recruitment steps / Provenance : Allows you to create sources to indicate the origin of an application / Skills : Allows you to create professional skills titles Training: Training management application: My training courses: Displays the courses of the active user. Staff training. Displays the trainings of the team of the active user (profile manager). All training courses : Displays all courses. Training dashboard: Training dashboards. Configuration: Categories : Allows you to create training categories / Training courses : Allows you to create training courses / Training sessions: Allows you to create training sessions. 
Evaluations : Internal evaluation management application: My evaluations: Displays the evaluations of the active user. My Team Evaluations: Displays the evaluations of the active user’s team (for a manager or HR profile). Evaluations : Displays all evaluations. Configuration: Evaluation templates : Allows you to create evaluation templates / Types of evaluations : Allows you to create evaluation types. Requests for leave: Application for managing leave requests: Complete my leave request: Allows you to complete and send a leave request. All my requests for leave: Displays all leave requests from the active user. Leave requests to be validated: Displays the leave requests that are to be validated (for a manager profile). Team leave request history: Calendar displaying the leave requests of the active user and his team if he has the status of manager. Absence schedule: Allows you to create leave requests after the fact in the event of an unforeseen absence that the employee will have to justify. Absence to justify: Allows a superior or HR manager to create an absence to justify for an employee Configuration: Absence type: Allows you to create absence reasons. Expense report: Application for managing expense reports : Complete my expense report: Allows you to complete and send an expense report. All my expense reports: Displays all expense reports of the active user. NdeF to be validated: Displays the expense reports that are to be validated (for a manager or HR profile). Ndef to be broken down: Displays all expense reports that need to be broken down. Multi-collaborator NDF: Allows you to manage expense reports that are multi-collaborators Team Expense History: Displays the history of the team’s expense reports Configuration: Expense type: Allows you to create expense report types / Tax powers: Allows you to create vehicle tax powers. Timesheet: Timesheet management application : Complete my timesheet: Allows you to complete and send a timesheet. All my timesheets: Displays all timesheets of the active user. Time sheets to be validated: Displays the timesheets to be validated (for a manager or HR profile). Team timesheet history: Displays the history of a team’s timesheets. Stopwatch : Allows to launch a stopwatch to calculate the time on the timesheets Overtime: Overtime management application: Enter overtime hours: Allows you to enter and send overtime hours. All my overtime hours: Displays all overtime hours for the active user. Overtime to be validated: Displays the overtime hours that are to be validated (for a manager or HR profile). Overtime history: Overtime history. Dashboards: Human resources application dashboards: HR Manager: HR dashboards for managers User HR: HR dashboards of the active user. Dedicated mobile application The employees Manage the list of employees, employment contracts, payroll preparation, restaurant vouchers, bonuses Create an employee record The employee record contains basic information: contact details, employment contract, holiday counter, etc. The creation of an employee record is done step by step. As long as the process is not finished, you always have the possibility to go back to the previous steps. You must necessarily fill in a weekly schedule and a schedule of public holidays. At the employment contract level there is a checkbox to indicate whether the employee benefits from the incentive bonus. 
If within the application configurations of the HR Module/Holiday Management you have activated the option "Allow negative values for employee leave", the corresponding box will be checked by default, at the employee record level, on the Leave and Timesheets tab, however you can uncheck it. If the employee is subject to an entry of his working time, tick the corresponding box. Before completing the creation process of the master record you must create the user corresponding to the employee. You can attach it to an existing user or create a new one. If this is the case, you must create an identifier and associate a group of rights to it. You can also enter an application period. Determine the HR manager On an employee record, a checkbox allows you to indicate if the employee is the HR manager. The status of HR Manager provides access to all human resources information and history. Inform a line manager On an employee form you can fill in a line manager, in which case he will receive notifications (provided he has set up an email account in advance) when requesting leave, overtime or expenses. Fill in the employee’s vehicle In order for an employee to be able to calculate our mileage costs, it is necessary to set up his vehicle on his "Employee" form. Once the process of creating an "Employee" form is complete, you can, in the "Vehicle" tab, enter all the information concerning his vehicle, and even import his registration card. On an "Employee" form, under the "Skills" tab, you can enter skills that can be classified by type: training skills, professional skills and others. Prepare the payroll Select the company concerned, the employee, the contract concerned and the period. A "Refresh" button is used to update the information. It is possible to add lines on payroll preparation for specific cases. Manage employment contracts The contract makes it possible to determine various elements such as the function, remuneration and duration of the trial period. An employment contract can have 3 distinct statuses: "on probation", "active", "closed". Manage meal vouchers Select the company code concerned, as well as the periods taken into account for payroll and absences and start the calculation. After that you can manually enter the number of tickets to be deducted depending on the situation (passage to the canteen, days spent abroad, or even an invitation to lunch). Manage restaurant voucher advances On an "Employee" form, in the "Restaurant tickets" tab click on the "Add an advance" button, enter the date of distribution and the number of tickets given. This information will be included in the preparation elements. Manage premiums The types of bonuses granted to employees can be configured beforehand in the configurations of the Employee Management menu. It is necessary to enter application conditions and set up a calculation formula. If you want a bonus to be included in the payroll preparation elements, you must check the box "Present in payroll preparation exports". Select the company code concerned, the type of bonus, the periods taken into account for payroll and absences, the basis for calculating the bonus and start the calculation. Time sheets Time sheets A time sheet allows an employee to indicate the time spent on the tasks he or she has performed. 
Pre-configurations At the application configuration level there are several options: - Billing type for time spent: this option allows you to choose whether the timesheets will be billed according to the activity used on the timesheet lines, or according to the activity defined by default on the employee form. Consolidate timesheet lines for invoices Stopwatch: allows you to use an integrated stopwatch on your timesheets to accurately calculate the time spent on an activity. Keep the project on the stopwatch: the project that was used when the stopwatch was last used is automatically resumed when you restart the stopwatch again. Edit stopwatch: if the option is not activated, once the stopwatch has stopped, the timesheet line created is read-only and you cannot change it. If you enable this option, you can edit the line to modify it. Timesheet editor: this option allows you to activate a "graphical" editor on the timesheets that will allow you to enter your times directly on a schedule. Time sheet template: allows you to define if by default the end date of a time sheet will automatically be at the end of the week or at the end of the month. If necessary, go to the HR configuration of the user’s active company (Application Configuration > Users/Companies > Companies > Companies > HR Configuration) then in the "Timesheet Templates" box, activate "Timesheet Notification Mail". Create and manage a timesheet Add lines manually and indicate for each of them a project, a task, a date, an activity, the time spent. If this task is billable, check the corresponding box. You can add any comments you may have. It is possible to generate mass lines in 2 ways: activate the editor and select a project and an activity from the tools, use the line generation wizard. At the end of the entry, click on "Confirm", the timesheet will then be waiting for validation by a superior. Use the timesheet editor This option allows you to activate a "graphical" editor on the timesheets that will allow you to enter your times directly on a schedule. On a timesheet check the "Editor" button. To add a project to your schedule click on the "New" button and select the project concerned. Then you can enter the time spent, by project and by day. Recover time on tasks Click on the "Tools" button on the taskbar, you can retrieve timelines, either forecasted or completed. Timelines are automatically filled from the schedules, however they can be modified manually. Generate lines automatically You can generate lines automatically, over a given period of time. To do this from the "Tools" button click on "Line generation wizard". Select a project and an activity as well as a working time per day. Then generate the lines. Validate a time sheet As a line manager, you can view each timesheet and decide whether to approve or reject it. If you refuse a timesheet, it returns to the person who created it in the status "refused". The latter then has the option of cancelling it or changing it back to draft status to modify it. Invoice time sheets Timesheet lines can be invoiced, especially in the context of a business deal. You have the choice in the options of the "Timesheets" sub-application between two types of invoicing: use line activity or use employee activity. If you use the line activity, you will be able to invoice the different timesheet lines of an employee, with potentially different activities per timesheet line. 
While having detailed times and activities can be useful for an employer, you don’t necessarily need as much detail when billing a client for time on a business or project, for example. In this case, you can use the employee activity for timesheet billing. In an employee file, in the timesheet tab, you can define by default an activity for an employee, in the field "Product for time spent". Send notification emails It is possible to receive an e-mail for each new time sheet to be validated, validated and refused. To do this, simply go to the HR configuration of the user’s active company (Application Configuration > Users > Companies > HR Configuration) then in the "Timesheet Templates" box, activate "Timesheet Notification Mail", which will allow you to configure e-mail sending templates for sent, validated and rejected timesheets. Using the stopwatch In order to enter times automatically and accurately, it is possible to use the stopwatch. By clicking on "Start", the stopwatch starts. When it is finished you click on "Stop" and a timesheet line is automatically created. You have the possibility to pause and restart the stopwatch whenever you wish. The timesheet will then be created when the stopwatch has been definitively stopped. Overtime hours Create and manage overtime Manually add overtime lines. For each line indicate a date, a number of hours and possibly a description. Validate overtime hours As a line manager you can view each overtime report and decide whether to approve or reject it. If you reject an overtime report, it goes to the employee who created it in the status "rejected". The latter then has the option of cancelling it or changing it back to draft status to modify it. Send notification emails It is possible to receive an e-mail for each additional hour to be validated, validated and refused. To do this, simply go to the HR configuration of the user’s active company (Application Configuration > Users > Company > HR Configuration) and then in the "Overtime templates" box, activate "Overtime notification mail", which will allow you to configure the email sending templates for sent, validated and rejected overtime Create and track job offers Job offers are created from the "Job offers" menu. Once the different fields of an offer have been filled in, you can click on the status "Open" when you want to indicate that the offer is in progress. Employee training Create a training course Training courses are created from the "Configuration" submenu. Fill in all the information related to the course you have created and save. You can register for training courses from the "Training" menu. You must choose your training and session and click on "Accept/plan". Once the training is completed, you must click on the "Training completed" button and give a note to the training. Create training sessions Training sessions are created from the Configuration menu > Training Sessions. You must select the business event for which you want to create the session, dates and number of hours, and save. Once the session is over, you can click on the "Logout" button. Manage registrations for a training session You can register for training courses from the "Training" menu. You must choose your training and session and click on "Accept/plan". If you want the training dates to appear on a calendar, select a calendar and once the training is completed, click on the "Training completed" button and give a note to the training. 
Validate a training request As a line manager, you have access to your employees' training requests via the "Staff training" submenu. You can then validate them. Manage the skills acquired during training When creating a training course, you can freely add skills (in the "Skills" field) in the form of tags. These will be skills that will be acquired during the training. These skills will automatically be associated with the employees who will participate in the training. Internal employee evaluation Create and manage internal evaluation processes You can create new evaluations from different menus: my evaluations, my team’s evaluations and evaluations. Enter the name of the employee, the person responsible for the evaluation, the type of evaluation and, if necessary, add a description. Clicking on the "Send" button is purely informative. Generate evaluations from a model An appraisal model must be linked to an appraisal type that has already been created. Enter a due date and the name of the person responsible for the training. You can possibly enter a description. Then click on the "Create evaluations" button. Select the employees concerned. Clicking on the "Send" button is purely informative. Leave of absence Configuring types of leave In the configurations of the "Leave requests" sub-menu, you must create types of leave (e.g.: Absence to justify, paid leave, etc.). For each type of absence you have access to different options: you can create a counter (in this case you can, or not, authorize the injection), you can authorize negative values by type of leave. You can fill in operating instructions. If you tick the box "Selection by HR department only", this type of leave can only be selected by the person identified as HR Manager. If you want certain types of leave to be included in the preparation elements, you must check the box "Present in payroll preparation exports". Create leave requests By clicking on "Complete my leave request", you open a new leave request, if no other is in progress, in draft status. You must fill in a reason and dates. This menu allows you to create a leave request after the date of the request, which is not possible with a traditional leave request. A reason is required to validate or record the leave request. In the "Actions" tab on the right, click on "Request sent leave" to send it. Validate leave requests As the line manager, you will find all the leave requests to be validated in the "Leave requests to be validated" menu. Allow negative values Allowing negative values for leave means that you allow an employee to take leave even if their balance is 0. You must activate the option in the company’s HR configurations, by clicking on "Allow negative values for employee leave". From now on, each time a new employee record is created, the option will be automatically activated, but can be modified manually. The authorization of negative values can also be managed by absence type (checkbox on the form). Manage unjustified absences If an employee is absent and it is an unforeseen absence, i.e. he or she was unable to request leave before his or her absence, this menu allows you to create a request for leave a posteriori. Send notification emails It is possible to receive an e-mail for each new leave request to be validated, validated and refused. 
To do this, simply go to the HR configuration of the user’s active company and then to the "Holiday templates" box, activate "Holiday request notification mail", which will allow you to configure the e-mail sending templates for sent, validated and rejected holiday requests. Expense reports Create and manage an expense report You can create an expense report from the sub-menu" "Complete my expense report"". Enter the project related to the expense; the type of expense, the date of the expense, the amount. A checkbox allows you to specify if this expense is re-billable to a customer. A distinction is made between overheads and mileage costs. The fees may have been paid with a business credit card or it may be a cash withdrawal with a business card as well. Manage mileage costs The expense report contains a "Kilometre costs" tab. Indicate the project related to this trip, the date, the tax power of the vehicle used, whether it is a one-way or round trip, the distance travelled, the departure and arrival cities and any comments. Manage multi-collaborator expense reports From the corresponding sub-menu, create your expense report; adding one line per employee associated with the expense. Manage NDF advances (one-time or permanent) On an "Employee" form, under the "Expense Report" tab, it is possible to request an advance, whether it is a one-time or permanent advance. Enter an amount and a payment method, and click on "OK" to send the request. Validate an expense report As a line manager, you can view each expense report and decide whether to approve or reject it. If you reject an expense report, it goes to the person who created it in the status "rejected". The latter then has the option of cancelling it or changing it back to draft status to modify it. Breakdown of an expense report From an expense report with validated status click on the "Breakdown" button. An accounting entry is then generated, provided that the accounting information related to expense reports has been previously configured in the company code application configurations of the Accounting module. Reimburse an expense report Depending on the options enabled, recording an expense payment in the system may only be informative. From an expense report with validated status click on the "Save payment" button. All you have to do is enter the payment method. The expense report will be considered as fully paid and will change to "Reimbursed" status. This is purely informative, you will have to manage the payment itself elsewhere. Send notification emails It is also possible to receive an e-mail for each expense report to be validated, validated and rejected. To do this, simply go to the HR configuration of the user’s active company (Application Configuration > Users > Companies > HR Configuration) and then in the "Expense note templates" box, activate "Expense note notification email", which will allow you to configure the email sending templates for sent, validated and rejected expense notes.
https://docs.axelor.com/abs/5.0/functional/hr.html
The temp module provides support for creating temporary directories and files. fn dir() str; fn file(io::mode, fs::mode...) (io::file | fs::error); fn named(*fs::fs, str, io::mode, fs::mode...) ((io::file, str) | fs::error); fn dir() str; Creates a temporary directory. This function only guarantees that the directory will have a unique name and be placed in the system temp directory, but not that it will be removed automatically; the caller must remove it when they're done using it via os::rmdir or os::rmdirall. The return value is statically allocated and will be overwritten on subsequent calls. fn file(iomode: io::mode, mode: fs::mode...) (io::file | fs::error); Creates an unnamed temporary file. The file may or may not have a name; not all systems support the creation of temporary inodes which are not linked to any directory. If it is necessary to create a real file, it will be removed when the stream is closed. The I/O mode must be either io::mode::WRITE or io::mode::RDWR. Only one variadic argument may be provided, if at all, to specify the mode of the new file. The default is 0o644. fn named( fs: *fs::fs, path: str, iomode: io::mode, mode: fs::mode... ) ((io::file, str) | fs::error); Creates a named temporary file in the given directory of the given filesystem. The caller is responsible for closing and removing the file when they're done with it. The name is statically allocated, and will be overwritten on subsequent calls.
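As a usage illustration, here is a minimal sketch that exercises dir and file as documented above. Everything in it comes from this page except os::rmdirall (which the description of dir itself points to) and the ! error assertions, which stand in for real error handling.

use io;
use os;
use temp;

export fn main() void = {
	// Unnamed temporary file, opened for reading and writing with the
	// default 0o644 mode; closed when we're done with it.
	const f = temp::file(io::mode::RDWR)!;
	defer io::close(f)!;

	// Temporary directory; the caller must remove it afterwards.
	const d = temp::dir();
	defer os::rmdirall(d)!;
};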
https://docs.harelang.org/temp
In addition to setting up security on individual Tags, you can set up security policies specific to each Security Zone. Tag Access is one of the options for a Security Policy. This is configured on the Gateway Webpage in the Config tab under Service Security.

For a Security Zone, you can set up one of five access levels to each Tag Provider. Typically, the Administrator will identify what zones have access to which Tag Providers.
https://docs.inductiveautomation.com/exportword?pageId=26019715
Getting started with a dedicated server

Find out how to proceed after the delivery of your dedicated server

Last updated 11th March 2022

A dedicated server is a physical server located in one of our data centres. Unlike Web Hosting plans (also referred to as "shared hosting"), which are technically managed by OVHcloud, you are fully responsible for the administration of your dedicated server. This guide will help you with the first steps of managing your dedicated server. If your server is of the Eco product line, go to this guide instead.

When your dedicated server is first set up during the order process, you can select which operating system will be installed. You can easily reinstall your server and choose a different OS image in your OVHcloud Control Panel. From the General information tab, click on ... next to the operating system and then click Install.

On the next screen, select either Install from an OVHcloud template or Install one of your templates in order to use a template for the installation. To be able to install a customised image on the server, choose the third option, Install from custom image. Please refer to the BYOI guide to learn about the settings of this functionality.

Some proprietary operating systems or platforms such as Plesk or Windows require licences which generate additional fees. You can buy licences via OVHcloud or from an external reseller. You will then need to apply your licence, in the operating system itself or by using your OVHcloud Control Panel. You can manage all your licences in the Bare Metal Cloud section under Licences. In this section, you can also order licences or add existing ones via the Actions button.

Click Next to continue.

After choosing Install from an OVHcloud template, you can select the operating system from the drop-down menus. If you need to modify the partitioning scheme of your operating system, check the box "Customise the partition configuration" before clicking on Next. After you have finished your adjustments, click Next to arrive at the summary page.

If you have selected a compatible GNU/Linux-based operating system, the option to activate RTM for the server will appear. Set the slider to Enabled to install it. You can find out more about the RTM feature in this guide.

If you are installing a GNU/Linux-based operating system, you can add your SSH key in the last step of the installation process. If you already have an SSH key registered, it will be listed in the drop-down menu under "SSH keys" at the bottom. Otherwise, you will need to add one in the "My services" section first.

To achieve this, open the sidebar navigation by clicking on your name in the top right corner and use the shortcut Service management. In "My services", switch to the SSH keys tab and click on Add an SSH key. As we are installing a dedicated server, make sure to select "Dedicated" from the drop-down menu (this also applies to a VPS). In the new window, enter an ID (a name of your choice) and the key itself (of type RSA, ECDSA or Ed25519) into the respective fields. For a detailed explanation on how to generate SSH keys, please refer to this guide, or see the sketch below.

Once the installation is completed, you will receive an email containing instructions for administrative access. You can connect to your server through a command terminal or with a third-party client by using SSH, which is a secure communication protocol.
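If you still need to create the SSH key mentioned above, one common way to generate a key pair of a supported type (Ed25519 in this sketch) is with OpenSSH's ssh-keygen; the comment and file name below are only examples.

# generate an Ed25519 key pair; the -C comment and -f path are examples
ssh-keygen -t ed25519 -C "my-ovh-server" -f ~/.ssh/id_ed25519_ovh
# print the public key, which is what you paste into "Add an SSH key"
cat ~/.ssh/id_ed25519_ovh.pub

Keep the private key (the file without the .pub extension) on your local machine and never upload it anywhere.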
Use the following examples to log on to your server, replacing the credentials with your actual information (IP address and server reference name are interchangeable).

Example with root:

ssh root@IPv4_of_your_server

Example with a pre-configured user:

ssh ubuntu@reference_name_of_your_server

You can learn more about SSH in this guide.

Once the installation is completed, you will receive an email containing your password for administrative (root) access. You will need to use these credentials to connect to the server via RDP (Remote Desktop Protocol). After logging in, Windows will guide you through an initial setup. Please also refer to our guide on Configuring a new Windows Server installation.

A reboot might become necessary in order to apply updated configurations or to fix an issue. Whenever feasible, perform a "soft reboot" via the command line:

reboot

However, you can carry out a "hard reboot" at any time in your OVHcloud Control Panel. From the General information tab, click on ... next to "Status" in the Service status box, then click Restart and confirm the action in the popup window.

As explained in the "Objective" section of this guide, you are the administrator of your dedicated server. As such, you are responsible for your data and its security. You can learn more about securing your server in this guide.

You can set the monitoring status for a dedicated server from the General information tab in your OVHcloud Control Panel (section Service status). If Monitoring is set to Enabled, you will be notified via email every time the server is behaving in an unexpected way. You can disable these messages via the ... button. You can find more information about OVHcloud Monitoring in this guide.

All OVHcloud dedicated servers are delivered with a /64 IPv6 block. To use the addresses in this block, you will need to make some network configuration changes. Please refer to our guide: IPv6 Configuration.

For any kind of issue, the first general troubleshooting step to take is rebooting your server into rescue mode from your OVHcloud Control Panel. It is important to identify server issues in this mode to exclude software-related problems before contacting our support teams. Please refer to the rescue mode guide.

OVHcloud deploys all dedicated servers with an IPMI (Intelligent Platform Management Interface) console which runs in your browser or from a Java applet, and enables you to connect directly to your server even if it has no network connection. This makes it a useful tool for troubleshooting issues that may have taken your server offline. For more information, please refer to our guide: Using the IPMI with dedicated servers.

OVHcloud dedicated servers include an access-controlled storage space as a free service option. It is best used as a complementary backup option in case the server itself suffers data loss. To activate and use the backup storage, please refer to this guide.
https://docs.ovh.com/au/en/dedicated/getting-started-dedicated-server/
Pivot Painter Material Functions allow you to tap into the Pivot Painter MAXScript, which stores rotation information within the vertices of a mesh. This is a great way to handle dynamic motion on Static Meshes. Although the data provided by Pivot Painter can be utilized without these functions, they do make the process much easier. Pivot Painter Functions The following is a list of all the functions underneath the Pivot Painter category. These functions are used to process and organize world position and angle information stored in the model's UVs by the Pivot Painter MAXScript. PivotPainter_HierarchyData This particular function is specifically designed to work with object hierarchies. The outputs labeled "-----------------" exist as separators in the list and are not intended to be used. PivotPainter_PerObjectData This particular function is designed to work on a per-object basis. PivotPainter_PerObjectFoliageData This function is designed to work specifically with individual foliage objects. PivotPainter_TreeData The outputs starting with tree process the model's UV information as it would be stored by the Pivot Painter MAXScript. The outputs starting with Leaf process the UV information as it would be stored by the per-object pivot painting section of the script. The outputs labeled "-----------------" exist as separators in the list and are not intended to be used.
https://docs.unrealengine.com/4.26/en-US/RenderingAndGraphics/Materials/Functions/Reference/PivotPainter/
4.6.1 Release Notes

Enhancements

- New advanced option for AWS S3 data sources, Enable file status check, and new property for metadata storage in dremio.conf:

debug: { dist.s3_file_status_check.enabled: enabled }

These options control whether Dremio verifies that a file exists in AWS S3 and the distributed storage for a data source, respectively. Both are enabled by default. If users notice failed LOAD MATERIALIZATION or DROP TABLE data acceleration jobs when using AWS S3 for distributed storage, disable dist.s3_file_status_check.enabled in dremio.conf and disable the Enable file status check advanced option on the data source (see the configuration sketch after the fixed issues list).

- New metric, NUM_COLUMNS_TRIMMED, reports the number of trimmed columns in Parquet-formatted files.

Fixed Issues in 4.6.1

Validation error with java.io.FileNotFoundException when refreshing a Data Reflection
Fixed by disabling the new Enable file status check advanced option for AWS S3 data sources and disabling the debug.dist.s3_file_status_check.enabled property in dremio.conf.

Queries on Dremio metadata containing a WHERE clause with both LIKE and OR operators return incorrect results
Fixed by correctly pushing down the OR query filter.

Executor nodes fail with ForemanException
Fixed by removing unnecessary columns and rowgroups from the footers of Parquet files.

When asynchronous access is disabled, Dremio is unable to gather metadata from the footers of Parquet files
Fixed by reverting to a known working Parquet footer.

Dremio crashes with java.io.FileNotFoundException
Fixed an issue with data consistency during refreshes of Data Reflections for AWS S3 data sources.

Inconsistent job status reported in job profile and job details
Fixed by asynchronously handling completion events from executor nodes.

Superfluous columns are not trimmed while scanning Data Reflections
Fixed by adding a handler method.
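If you do need to turn the metadata-storage check off, a minimal sketch of the corresponding dremio.conf override is shown below. This assumes the flag accepts a standard HOCON boolean, so confirm the exact value syntax against your deployment before relying on it.

# dremio.conf (sketch): disable the distributed-storage file status check
debug: {
  dist.s3_file_status_check.enabled: false
}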
https://docs.dremio.com/software/release-notes/461-release-notes/
Click Beandot FameTheme in the left menu to load the options page. Theme Options include:

- General Settings
- Styling Options
- Advertisement
- Single Post
- Optin Form
- Social Networks
- SEO Marketing
- Backup Options

Custom Widgets

The theme comes with unique custom widgets that can be used to configure how your site displays content – they can be found under Appearance > Widgets.

Custom 200 x 125 Ads Widget

This widget allows you to configure and display up to six 200 x 125 banner advertisements.
- Title – Title of the widget
- Ad image url – The URL of your banner image
- Ad link url – The destination URL of your banner ad
- Randomize – Check this box to display the Ad banners in a random order on each page load.

Custom Recent Posts Widget

This widget allows you to show your latest blog posts in widgetised areas.
- Title – Title of the widget
- Filter by Category – Choose a category for filtering.
- Number of posts – Input the number of recent posts to display.

Custom Popular Posts Widget

This widget allows you to show your popular blog posts in widgetised areas.
- Title – Title of the widget
- Number of posts – Input the number of popular posts to display.

Custom Newsletter Widget

This widget allows you to display a newsletter sign-up form – the widget has options to configure the title and add a description. The widget is configured via Theme Options and handled by FeedBurner and email marketing services such as Aweber, Mailchimp, FeedBurner, etc.

Sources and Credits

- Stock Photos from PhotoDune
- Google Font – Varela Round
https://docs.famethemes.com/article/36-beandot-documentation
DataOps

DataOps in Glean allows you to represent your resources as configuration files, which can be stored in a Git repository and deployed to your Glean project as part of a change management workflow.

Using DataOps, you can:
- ✅ Validate planned changes to your data warehouse or Glean resources to ensure that your views and dashboards don't break
- 🏗️ Preview updates to your Glean project before making them visible to the rest of your organization
- 👥 Use code reviews or pull requests to collaborate on proposed changes
- 🧑💻 Configure Glean using the same tools that you use to develop your backend pipelines
- 🚦 Use a continuous integration system to deploy updates to your Glean project

DataOps is under active development and currently has some limitations, including:
- The Glean configuration files do not yet support every feature that is available in the Glean web application.
- If you build resources from a Git repository, users are still able to edit those resources through the Glean UI. When you re-deploy these resources with your DataOps configuration files, any changes made through the Glean UI will be overwritten.

Overview

To use DataOps, you define one or more of your Glean resources using configuration files. A configuration file contains a complete specification of a resource. For instance, a Model configuration file contains information about which database connection to use, the name of the underlying database table, a list of attributes and metrics, etc. (a hypothetical sketch appears at the end of this page).

Using a set of configuration files, you create a Build. There are two different kinds of Builds:
- A Preview Build validates your configuration files and, if successful, provides a URL that will show you what your Glean project will look like if your pending changes are applied.
- A Deploy Build validates your configuration files and, if successful, publishes those changes to your Glean project.

You can create a Build using the Glean command-line interface (CLI) or through the Glean web application. Glean integrates with Git and can create Builds using specific Git branches or revisions that have been pushed to your repository.

Getting Started

Configuring Git credentials

Glean needs to be granted access to your Git repository in order to create Builds from specific revisions. To configure your Git credentials:
- Navigate to the Settings page using the link on the navigation side bar
- Click on Version Control
- Enter your connection settings for the Git repository
- We recommend you use an access token to restrict access to just the appropriate resources:
  - The Name field is optional and will just help users identify in plain language what repository you configured for your project, e.g. "engineering data pipeline repo"
  - The default branch will be used as the default for builds, usually "main", "master" or "production"
  - The default path describes the root of your Glean configuration directory within the repo
- Click the 🗼 Test button to test your Git credentials
- Click Save Credentials
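To make the Overview above concrete, here is a purely hypothetical sketch of what a Model configuration file could look like, mirroring the fields it mentions (database connection, underlying table, attributes, metrics). The file name, keys, and values are invented for illustration and are not Glean's actual schema; consult the Glean configuration reference for the real format.

# model_sales.yaml (illustrative only; every field name here is hypothetical)
type: model
source:
  connection: analytics_warehouse   # which database connection to use
  table: public.sales_orders        # name of the underlying database table
attributes:
  - name: country
  - name: invoice_date
metrics:
  - name: total_revenue
    sql: SUM(unit_price * quantity)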
https://docs.glean.io/docs/data-ops/
Displays and sets properties for revisions. The Revision tab is always read-write (subject to user permissions). Action Specifies the available actions when accessing Properties on a single component, multiple components, or multiple documents. Append Record Creates a new revision record for each document under the selected components or in the document set. You can type a value for the next revision mark or let the software automatically increment it for you. Edit Last Record Edits the last revision for each document under the selected components or in the document set. Only the edited revision fields overwrite the corresponding fields on the last revision record. To clear a populated revision field, type a single space character, and no other characters, in the edited field. The Append Record and Edit Last Record options are not available for a model registered with SmartPlant Foundation or when revising a single document. Revision Mark Specifies the current revision. For single documents, double-click the New Record cell to automatically increment to the next revision mark number. To manually type a value for the next revision mark, click the New Record cell and type the value. This only applies when the model has not been registered with SmartPlant Foundation. If this cell is not edited, then the revision mark number automatically increments to the next available number in each writeable document associated to the selected set. Revision Minor Number Specifies the minor revision number for the revision. Description Specifies the scope of the revision. Revised By Specifies the person who made the revision. Revision Date Specifies the date of the revision. Check Specifies the person who checked the revision. Check Date Specifies the date the revision was checked. Approved By Specifies the person who approved the revision. Approval Date Specifies the date the revision was approved. The appearance and behavior of the contents of this tab differ depending on whether properties are accessed on a single document or accessed on a single component, multiple components, or multiple documents. You can add your own custom revision properties to your drawings by bulkloading the Sample Custom Revision Properties for Bulkload.xls workbook into your database. This workbook is located in the [Product Folder]\CatalogData\BulkLoad\SampleDataFiles folder. The properties will then display in this table. See Sample Custom Revision Properties for Bulkload Workbook and Place a custom revision property on a border. The contents of this tab also depend on whether the model is registered to SmartPlant Foundation. Unregistered If you access Properties on a single document and your model has not been registered to SmartPlant Foundation, the Revision tab displays previous entries made. A new row is available to make a new entry. You can edit each field using alphanumeric and special characters. If you access Properties on a single component, multiple components, or multiple documents and your model has not been registered to SmartPlant Foundation, the Revision tab has a single blank row for a new or edited entry. All fields are editable. Their values are propagated to the writeable documents that are associated with the selected set. Registered If your model has been registered to SmartPlant Foundation, use the Revise command to create revision numbers. This command reserves a revision number by adding it to the document Revision properties. 
The revision number is added in the form of a blank row on the Revision tab of the Properties dialog. After reserving the revision number, right-click the document and select Properties. Go to the Revision tab and edit the Revision fields. All fields except for Revision Mark and Revision Minor Number are editable. See Revising. You can create more than one revision per instance of the Properties Dialog by selecting Apply after adding a record. You can delete one or more revision records by highlighting the revision rows and pressing DELETE. You must select OK or Apply to make the deletion permanent. The rows selected for deletion must be adjacent and must include the last revision record.
https://docs.hexagonppm.com/r/en-US/Intergraph-Smart-3D-Reports/Version-2016-11.0/889153
Timeline - Zoom The Zoom options on the Timeline allow you to view your narrative from three points of view: Large, Medium, and Small. What this Article Covers Getting Started You can toggle the Zoom options on the Timeline by clicking the "L", "M", and "S" buttons on the toolbar – the Timeline displays in Large mode by default: Large Zoom The "Large" zoom is the default view of the Timeline, and is ideal for starting your project. Click the "L" button on the Timeline toolbar to view your project in Large Zoom mode: Medium Zoom The "Medium" zoom view of the Timeline is ideal for scanning your project while it's in progress. Click the "M" button on the Timeline toolbar to view your project in Medium Zoom mode: Small Zoom The "Small" zoom view of the Timeline is ideal for viewing large projects at a glance. Click the "S" button on the Timeline toolbar to view your project in Small Zoom mode: Additional Details: - The colored dots in Small Zoom indicate existing Scene Cards - You can hover over each colored dot to view the Scene Card details - You can drag and drop Chapters, Plotlines, and Scene Cards in Small Zoom Known Limitations - You can't currently manually adjust Zoom by percentages
https://docs.plottr.com/article/288-timeline-zoom
2022-05-16T09:19:39
CC-MAIN-2022-21
1652662510097.3
[]
docs.plottr.com
Timeline - Flip The Flip options on the Timeline allow you to toggle your narrative between Vertical (Default) and Horizontal view mode. What this Article Covers Overview You can toggle the Flip options on the Timeline by clicking the "Flip" button on the toolbar – the Timeline displays in Vertical mode by default: Vertical View Mode The "Vertical" view is the default view of the Timeline, and displays your Chapter headings across the top and your Plotline titles down the side of the screen. The Vertical mode is identified by the three dots to the left of the "Flip" button being vertical: Horizontal View Mode The "Horizontal" view of the Timeline displays your Plotline titles across the top and your Chapter headings down the side of the screen. Click the "Flip" button on the Timeline toolbar to view your project in Horizontal view – identified by the three dots to the left of the "Flip" button being horizontal: Known Limitations - You can't currently add Templates to the Timeline in Horizontal Mode
https://docs.plottr.com/article/305-timeline-flip
2022-05-16T09:40:58
CC-MAIN-2022-21
1652662510097.3
[]
docs.plottr.com
The sales order listing report helps you view the listing of a particular sales order. Viewing the Sales Order Listing To view the sales order listing, go to Inventory > Reports > Sales > Sales Order Listing; the sales order listing report is displayed. Several filters are available that you can enable for your report. There are two radio buttons, of which you can select one at a time. Orders: You can select a range of sales order document numbers. From: Starting document number. To: Ending document number. Based on Date: You can filter the sales order listing by date. Date from: This field sets the starting date. Date to: This field sets the ending date. There are three checkboxes that can be selected independently of one another. Customer: Check this box to select a particular customer. Warehouse: Check this box to select a particular warehouse. Salesman: Check this box to select a particular salesman. Note: When any of these checkboxes is unchecked (for example, Customer), entries for all customers are shown against the selected warehouse and salesman. Show Items: When checked, this checkbox shows the relevant items. There are three radio buttons in front of it, of which you can select one at a time. Select "All" to include all item types, "Measurements only" to include only dimensional items, or "Size/Color only" to include only size/colour items. Show Documents: When checked, this checkbox shows the relevant documents. Select All: Check this box to select all the report information options; otherwise, you can select the options manually. The other checkboxes are: Primary Phone, Secondary Phone, Email 1, Email 2, Fax 1, Fax 2, VAT/Tax ID, C.C.R Number, Post Box, Country, City, ZIP Code, Address. Note: Enabling any checkbox causes that field to be shown in the report.
https://docs.smacc.com/ar/sales-order-listing/
2022-05-16T09:15:19
CC-MAIN-2022-21
1652662510097.3
[]
docs.smacc.com
Set up alerts - Open your chart layout - Click the clock in the top right - Press "Create alert" - Click the condition box and pick an indicator - Choose an alert condition - Select how often you want to be notified when your alert gets triggered (Recommended: Once Per Bar Close) - Choose an expiration time or select "Open-ended" - Configure how you want to be notified once your alert gets triggered - Set a fitting alert name and message - Click on "Create" and you can now see your custom alert under "Alerts"
https://docs.whalecrew.com/setup/alerts/
2022-05-16T09:10:02
CC-MAIN-2022-21
1652662510097.3
[]
docs.whalecrew.com
Minting with royalties Royalties for a given asset are defined at the point of minting a new asset to the Immutable X protocol. Please ensure your recipients are registered before minting. Royalty fees for newly minted assets Minting with royalties requires an @imtbl/imx-sdk version that is >= 1.1.3. To set up your minter, refer to this minting example. The main difference between the example above and the one below is the object structure of the mints. The previous example does not support fees and will be deprecated and replaced with the example below.

const result = await minter.mintV2([
  {
    "contractAddress": "0xc6185055ea9891d5d9020c927ff65229baebdef2",
    "royalties": [ // global fees
      {
        "recipient": "0xA91E927148548992f13163B98be47Cf4c8Cb3B16",
        "percentage": 2.5
      }
    ],
    "users": [
      {
        "etherKey": "0xc3ec7590d5970867ebd17bbe8397500b6ae5f690",
        "tokens": [
          { // ERC-721
            "id": "1",
            "blueprint": "my-on-chain-metadata",
            "royalties": [ // override global fees on a per-token basis
              {
                "recipient": "0xc3ec7590d5970867ebd17bbe8397500b6ae5f690",
                "percentage": 2.5
              }
            ],
          }
        ]
      },
      {
        "etherKey": "0xA91E927148548992f13163B98be47Cf4c8Cb3B16",
        "tokens": [
          { // ERC-721
            "id": "",
            "blueprint": ""
          }
        ]
      },
      ...
    ]
  }
]);

// Returns
{
  [
    {
      "contract_address": string;
      "token_id": string;
      "tx_id": number;
    },
    {
      "contract_address": string;
      "token_id": string;
      "tx_id": number;
    },
    ....
  ]
}

Note: Values are only used for indicating format.
Important notes
- users.etherKey represents a valid Ethereum wallet address that the token will be minted to.
- id: The id for the asset on your system. This id is used in conjunction with the metadata endpoint provided during contract registration to fetch metadata associated with that asset.
- blueprint is on-chain metadata that will be included as part of the Layer 1 mint if the minted ERC-721 token is withdrawn from Immutable X onto Layer 1 Ethereum. Right now this can be any string as long as it is not empty.
- You can specify the percentage up to 2 decimal places. Anything beyond that will be truncated to 2 d.p. This is to ensure that rounding up doesn't cause the sum of all constituent percentages to exceed 100%.
Viewing the asset royalty fees
You can view the royalty information for a given asset via the assets API. This will return the royalty recipient and fee percentage for each royalty associated with an asset.
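To make the truncation rule under "Important notes" concrete: percentages beyond two decimal places are truncated, not rounded, so 2.567 becomes 2.56 rather than 2.57. The following is only a small Python illustration of that rule, not part of the Immutable X SDK:

import math

def truncate_percentage(p: float) -> float:
    """Truncate a royalty percentage to 2 decimal places (no rounding up)."""
    return math.floor(p * 100) / 100

print(truncate_percentage(2.567))  # 2.56 (rounding would have given 2.57)
print(truncate_percentage(2.5))    # 2.5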
https://docs.x.immutable.com/docs/minting-with-royalties
2022-05-16T09:31:33
CC-MAIN-2022-21
1652662510097.3
[]
docs.x.immutable.com
The documentation you are viewing is for Dapr v1.3 which is an older version of Dapr. For up-to-date documentation, see the latest version. Zeebe JobWorker binding spec Configuration To set up the Zeebe JobWorker binding, create a component of type bindings.zeebe.jobworker. See this guide on how to create and apply a binding configuration.
https://v1-3.docs.dapr.io/zh-hans/reference/components-reference/supported-bindings/zeebe-jobworker/
2022-05-16T08:22:58
CC-MAIN-2022-21
1652662510097.3
[]
v1-3.docs.dapr.io
Python is a mature programming language which has established a reputation for stability. In order to maintain this reputation, the developers would like to know of any deficiencies you find in Python or its documentation. All bug reports should be submitted via the Python Bug Tracker on SourceForge (). The bug tracker offers a Web form which allows pertinent information to be entered and submitted to the developers. Before submitting a report, please log into SourceForge if you are a member; this will make it possible for the developers to contact you for additional information if needed. If you are not a SourceForge member but would not mind the developers contacting you, you may include your email address in your bug description. In this case, please realize that the information is publicly available and cannot be protected. When submitting the report, choose a category that matches the problem area (such as ``Documentation'' or ``Library''). Each bug report will be assigned to a developer who will determine what needs to be done to correct the problem. If you have a SourceForge account and logged in to report the problem, you will receive an update each time action is taken on the bug. See Also: See About this document... for information on suggesting changes.
http://docs.python.org/release/2.1.2/lib/reporting-bugs.html
2012-05-27T02:37:41
crawl-003
crawl-003-018
[]
docs.python.org
#include "std.h" #include "subsystems/sensors/baro.h" #include "mcu_periph/adc.h" #include "mcu_periph/dac.h" Go to the source code of this file. Definition at line 42 of file baro_board.h. Definition at line 53 of file baro_board.h. Definition at line 37 of file baro_board.h. 55 of file baro_board.h. References baro_board, DACSet(), and BaroBoard::offset. Definition at line 63 of file baro_board.c. Referenced by baro_board_calibrate(), baro_board_SetOffset(), baro_init(), baro_periodic(), and lisa_l_baro_event().
http://docs.paparazziuav.org/latest/booz_2baro__board_8h.html
2019-12-06T03:05:40
CC-MAIN-2019-51
1575540484477.5
[]
docs.paparazziuav.org
Robust Statistical Estimators¶ Robust statistics provide reliable estimates of basic statistics for complex distributions. The statistics package includes several robust statistical functions that are commonly used in astronomy. This includes methods for rejecting outliers as well as statistical description of the underlying distributions. In addition to the functions mentioned here, models can be fit with outlier rejection using FittingWithOutlierRemoval(). Sigma Clipping¶ Sigma clipping provides a fast method to identify outliers in a distribution. For a distribution of points, a center and a standard deviation are calculated. Values which are less or more than a specified number of standard deviations from a center value are rejected. The process can be iterated to further reject outliers. The astropy.stats package provides both a functional and object-oriented interface for sigma clipping. The function is called sigma_clip() and the class is called SigmaClip. By default, they both return a masked array where the rejected points are masked. First, let’s generate some data that has a mean of 0 and standard deviation of 0.2, but with outliers: >>> import numpy as np >>> import scipy.stats as stats >>> np.random.seed(0) >>> x = np.arange(200) >>> y = np.zeros(200) >>> c = stats.bernoulli.rvs(0.35, size=x.shape) >>> y += (np.random.normal(0., 0.2, x.shape) + ... c*np.random.normal(3.0, 5.0, x.shape)) Now, let’s use sigma_clip() to perform sigma clipping on the data: >>> from astropy.stats import sigma_clip >>> filtered_data = sigma_clip(y, sigma=3, maxiters=10) The output masked array then can be used to calculate statistics on the data, fit models to the data, or otherwise explore the data. To perform the same sigma clipping with the SigmaClip class: >>> from astropy.stats import SigmaClip >>> sigclip = SigmaClip(sigma=3, maxiters=10) >>> print(sigclip) <SigmaClip> sigma: 3 sigma_lower: None sigma_upper: None maxiters: 10 cenfunc: <function median at 0x108dbde18> stdfunc: <function std at 0x103ab52f0> >>> filtered_data = sigclip(y) Note that once the sigclip instance is defined above, it can be applied to other data, using the same, already-defined, sigma-clipping parameters. For basic statistics, sigma_clipped_stats() is a convenience function to calculate the sigma-clipped mean, median, and standard deviation of an array. As can be seen, rejecting the outliers returns accurate values for the underlying distribution: >>> from astropy.stats import sigma_clipped_stats >>> y.mean(), np.median(y), y.std() (0.86586417693378226, 0.03265864495523732, 3.2913811977676444) >>> sigma_clipped_stats(y, sigma=3, maxiters=10) (-0.0020337793767186197, -0.023632809025713953, 0.19514652532636906) sigma_clip() and SigmaClip can be combined with other robust statistics to provide improved outlier rejection as well. 
import numpy as np
import scipy.stats as stats
from matplotlib import pyplot as plt
from astropy.stats import sigma_clip, mad_std

# Generate fake data that has a mean of 0 and standard deviation of 0.2 with outliers
np.random.seed(0)
x = np.arange(200)
y = np.zeros(200)
c = stats.bernoulli.rvs(0.35, size=x.shape)
y += (np.random.normal(0., 0.2, x.shape) +
      c*np.random.normal(3.0, 5.0, x.shape))

filtered_data = sigma_clip(y, sigma=3, maxiters=1, stdfunc=mad_std)

# plot the original and rejected data
plt.figure(figsize=(8,5))
plt.plot(x, y, '+', color='#1f77b4', label="original data")
plt.plot(x[filtered_data.mask], y[filtered_data.mask], 'x',
         color='#d62728', label="rejected data")
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=2, numpoints=1)

Median Absolute Deviation¶ The median absolute deviation (MAD) is a measure of the spread of a distribution and is defined as median(abs(a - median(a))). The MAD can be calculated using median_absolute_deviation. For a normal distribution, the MAD is related to the standard deviation by a factor of 1.4826, and a convenience function, mad_std, is available to apply the conversion. Note A function can be supplied to the median_absolute_deviation to specify the median function to be used in the calculation. Depending on the version of numpy and whether the array is masked or contains irregular values, significant performance increases can be had by pre-selecting the median function. If the median function is not specified, median_absolute_deviation will attempt to select the most relevant function according to the input data. Biweight Estimators¶ A set of functions are included in the astropy.stats package that use the biweight formalism. These functions have long been used in astronomy, particularly to calculate the velocity dispersion of galaxy clusters 1. The following set of tasks are available for biweight measurements: astropy.stats.biweight Module¶ This module contains functions for computing robust statistics using Tukey's biweight function. References¶ - 1 Beers, Flynn, and Gebhardt (1990; AJ 100, 32)
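As a brief sketch of the MAD and biweight estimators discussed above, using the public astropy.stats functions (the exact numbers printed depend on the randomly generated data):

import numpy as np
from astropy.stats import (mad_std, median_absolute_deviation,
                           biweight_location, biweight_scale)

np.random.seed(0)
data = np.random.normal(0., 0.2, 200)
data[:20] += np.random.normal(3.0, 5.0, 20)  # contaminate with outliers

# MAD-based spread estimates are far less sensitive to the outliers
print(median_absolute_deviation(data))  # raw MAD
print(mad_std(data))                    # MAD scaled by ~1.4826 to estimate sigma

# Biweight estimators of location (central value) and scale (spread)
print(biweight_location(data))
print(biweight_scale(data))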
https://docs.astropy.org/en/stable/stats/robust.html
2019-12-06T04:20:55
CC-MAIN-2019-51
1575540484477.5
[array(['../_images/robust-1.png', '../_images/robust-1.png'], dtype=object)]
docs.astropy.org
Create Amazon Connect Contact Flows A contact flow defines the customer experience with your contact center from start to finish. Amazon Connect includes a set of default contact flows so you can quickly set up and run a contact center. However, you may want to create custom contact flows for your specific scenario.
https://docs.aws.amazon.com/connect/latest/adminguide/connect-contact-flows.html
2019-12-06T04:08:01
CC-MAIN-2019-51
1575540484477.5
[]
docs.aws.amazon.com
File sharing and management¶ - File Sharing - Configuring Federation Sharing - Uploading big files > 512MB - Providing default files - Configuring Object Storage as Primary Storage - Configuring External Storage (GUI) - Configuring External Storage (configuration file) - Transactional file locking - Previews configuration - Controlling file versions and aging
https://docs.nextcloud.com/server/17/admin_manual/configuration_files/index.html
2019-12-06T03:17:40
CC-MAIN-2019-51
1575540484477.5
[]
docs.nextcloud.com
The Crash Dashboard aggregates mobile application crash data over time, using the Events Service. This service collects and stores all the data collected by the mobile agent. The Crash Dashboard is divided into two panels and has the App Version dropdown that enables you to view crash data for different versions of your application. Summary Crash Trend This panel displays a running total of crashes, unique crashes, impacted users, crash rate, and crash trends. The crash trend is a timeline of crash rates. It also reports any iOS crash reports that were uploaded without the accompanying dSYM file. For more information on how to use dSYM files with crash reports, see Get Human-Readable Crash Snapshots. Unique Crashes Multiple crashes can be caused by the same underlying code issue. The Unique Crashes panel displays a list of crashes grouped by common characteristics and displays basic information about each crash. You can view open, closed, or all crashes. Unique Crash Details To see more detail for a crash, click the crash that interests you, shown in blue. The dashboard for that crash appears, with a header, trend bar graph, crash distribution charts, and a snapshot of the crucial details common to all the crash snapshots. See the section Crash Summary on the Crash Analyze page for more information. Crash Status In addition to viewing crash details, from the Unique Crashes panel you can select a unique crash, click Actions, and set the status to either open or closed. You can use the status to mark unique crashes that you want to ignore or whose root cause has been fixed. When a crash is marked as closed, the crash no longer triggers a new crash event, so you will not see the crash in the Events widget of the Mobile App Dashboard or on the Events page, and it won't be included in alerts for new crashes.
https://docs.appdynamics.com/display/PRO45/Crash+Dashboard
2019-12-06T04:12:34
CC-MAIN-2019-51
1575540484477.5
[]
docs.appdynamics.com
Access to a Peered VPC The configuration for this scenario includes a single VPC and an additional VPC that is peered with the target VPC. We recommend this configuration if you need to give clients access to the resources inside a target VPC and other VPCs that are peered with it. To implement this configuration Ensure that you have a VPC with at least one subnet. Identify the subnet in the VPC that you want to associate with the Client VPN endpoint and note its IPv4 CIDR ranges. For more information, see VPCs and Subnets in the Amazon VPC User Guide. Ensure that the VPC's default security group allows inbound and outbound traffic to and from your clients. For more information, see Security Groups for Your VPC in the Amazon VPC User Guide. Establish the VPC peering connection between the VPCs. Follow the steps at Creating and Accepting a VPC Peering Connection in the Amazon VPC User Guide. Test the VPC peering connection. Confirm that instances in either VPC can communicate with each other as if they are within the same network. If the peering connection works as expected, continue to the next step. Create a Client VPN endpoint in the same region as the VPC identified in Step 1. Perform the steps described in Create a Client VPN Endpoint. Associate the subnet you identified earlier with the Client VPN endpoint that you created. To do this, perform the steps described in Associate a Target Network with a Client VPN Endpoint and select the subnet and the VPC. Add an authorization rule to give clients access to the VPC. To do this, perform the steps described in Add an Authorization Rule to a Client VPN Endpoint, and for Destination network to enable, enter the IPv4 CIDR range of the VPC. Add a route to direct traffic to the peered VPC. To do this, perform the steps described in Create an Endpoint Route; for Route destination, enter the IPv4 CIDR range of the peered VPC, and for Target VPC Subnet ID, select the subnet you associated with the Client VPN endpoint. Add an authorization rule to give clients access to the peered VPC. To do this, perform the steps described in Add an Authorization Rule to a Client VPN Endpoint; for Destination network, enter the IPv4 CIDR range of the peered VPC, and for Grant access to, select Allow access to all users. A scripted sketch of the last three steps is shown below.
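The following is a minimal boto3 sketch (not part of the original guide) of how the two authorization rules and the endpoint route above could be created programmatically; the region, endpoint ID, subnet ID, and CIDR ranges are placeholder values you would replace with your own:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

ENDPOINT_ID = "cvpn-endpoint-0123456789abcdef0"      # placeholder Client VPN endpoint ID
TARGET_VPC_CIDR = "10.0.0.0/16"                      # placeholder CIDR of the target VPC
PEERED_VPC_CIDR = "10.1.0.0/16"                      # placeholder CIDR of the peered VPC
ASSOCIATED_SUBNET_ID = "subnet-0123456789abcdef0"    # subnet associated with the endpoint

# Authorization rule that gives clients access to the target VPC
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId=ENDPOINT_ID,
    TargetNetworkCidr=TARGET_VPC_CIDR,
    AuthorizeAllGroups=True,
)

# Route that directs traffic to the peered VPC through the associated subnet
ec2.create_client_vpn_route(
    ClientVpnEndpointId=ENDPOINT_ID,
    DestinationCidrBlock=PEERED_VPC_CIDR,
    TargetVpcSubnetId=ASSOCIATED_SUBNET_ID,
)

# Authorization rule that gives clients access to the peered VPC (all users)
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId=ENDPOINT_ID,
    TargetNetworkCidr=PEERED_VPC_CIDR,
    AuthorizeAllGroups=True,
)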
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/scenario-peered.html
2019-12-06T02:44:18
CC-MAIN-2019-51
1575540484477.5
[]
docs.aws.amazon.com
Content and Experience Oracle Content and Experience is a cloud-based content hub to drive omni-channel content management and accelerate experience delivery. It offers powerful collaboration and workflow management capabilities to streamline the creation and delivery of content and improve customer and employee engagement. - Overview of Oracle Content and Experience - Purchase and Activate an Oracle Cloud Subscription - Create Your Service Instance - Set Up Users and Groups - What to Do Next - Monitor the Service Related Content - What's New - Administering Content and Experience - Developing for Content and Experience - Managing Content - Creating Experiences
https://docs.cloud.oracle.com/iaas/Content/content-and-experience/index.html
2019-12-06T03:13:14
CC-MAIN-2019-51
1575540484477.5
[]
docs.cloud.oracle.com
Deploying Contracts Know how to deploy contracts on Ethereum (Base) and Matic (Child chain) When do I need to deploy contracts?¶ For running a private testnet or single validator version of Matic we need to deploy plasma contracts. However for connecting to a public testnet you just need to fetch the addresses from the public-testnets repo. Installing NodeJS¶ To move further we need to install nodejs which is an open-source runtime javascript execution environment using nvm like explained here. Installing Truffle¶ To deploy contracts you need to install truffle like explained here. Clone the contracts repo¶ $ git clone $ cd contracts Install dependencies¶ $ npm i Deploying contracts¶ We need to install 2 set of contracts, one on base chain and one on side-chain, the instructions to do that can be found here. We need to deploy our set of contracts on 2 chains: Base Chain: Ideally a higher security EVM chain which can be used for dispute resolution. For testing ganache or any other EVM chain should work. Child Chain: EVM compatible chain to work as our side-chain. For testing note that using ganachefor child-chain is not recommended, instead running npm run simuate-borwould be better. Step 1: Deploy root contracts on base chain¶ - Do make sure that the dependencies are installed and that you have cloned the contracts repo as stated above in this document. $ export HEIMDALL_ID="<your heimdall ID>" // From the contracts repo, do the following $ npm run truffle:compile $ mv migrations dev-migrations && cp -r deploy-migrations migrations Root contracts are deployed on the base chain. Base chain can be your own ganache or testnets like rinkeby, ropsten. If you're running it locally, npm run testrpcwill bring a local test blockchain up to function as basechain. Modify truffle-config.jsto configure base chain. $ npm run truffle:migrate -- --reset --network <base_chain_network_name> --to 3 Post successful deployment all contract addresses will be written to a contractAddresses.json file. Step 2: Deploy contracts on Bor¶ // Contracts like ChildERC20Token are deployed on child chain aka BOR chain // NOTE: You need to deploy or simulate BOR before running the below command // Modify truffle-config.js to configure bor chain */ bor: { host: 'localhost', port: 8546, network_id: '*', // match any network skipDryRun: true, gas: 7000000 } /* $ npm run truffle:migrate -- --reset --network <child_chain_network_name> -f 4 --to 4 Step 3: Link contracts on Bor with contracts on base chain¶ // Contracts deployed on BOR are mapped to the registry contract deployed on-chain $ npm run truffle:migrate -- --network <base_chain_network_name> -f 5 --to 5 Post successful deployment all contract addresses will be written to a contractAddresses.json file. Check your ether balance on base chain before deploying contracts. Almost Done!¶ The contractAddresses.json file should be stored somewhere where you can read it again because we need to add contract addresses to heimdall-config.
https://docs.matic.network/staking/validator-contracts/deploying-contracts/
2019-12-06T04:27:56
CC-MAIN-2019-51
1575540484477.5
[]
docs.matic.network
Portal: How to view learning plans - Click on the Learning menu and then click on Learning Plans. - A list of all of the learning plans that your school has shared with you is shown. Learning plans are ordered by date with the most recent one at the top. - Click on or tap a plan in the list to view it. Parents and students are only able to view the final learning plan and are not able to edit a plan.
https://docs.xuno.com.au/article/265-portal-how-to-view-learning-plans
2019-12-06T03:00:27
CC-MAIN-2019-51
1575540484477.5
[]
docs.xuno.com.au
Define resources in Azure Resource Manager templates When creating Resource Manager templates, you need to understand what resource types are available, and what values to use in your template. The Resource Manager template reference documentation simplifies template development by providing these values. If you are new to working with templates, see Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal for an introduction to working with templates. To determine the locations that are available for a resource type, see Set location in templates. To add tags to resources, see Tag resources in Azure Resource Manager templates. If you know the resource type, you can go directly to it with the following URL format: {provider-namespace}/{resource-type}. For example, the SQL database reference content is available at: The resource types are located under the Reference node. Expand the resource provider that contains the type you are looking for. The following image shows the types for Compute. Or, you can filter the resource types in the navigation pane:
https://docs.microsoft.com/ja-jp/azure/templates/
2019-12-06T03:43:40
CC-MAIN-2019-51
1575540484477.5
[array(['media/index/show-compute-types.png', 'show resource types'], dtype=object) array(['media/index/filter-types.png', 'filter resource types'], dtype=object) ]
docs.microsoft.com
Your process up-time targets demand a rapid response in either planned or unplanned down situations. We maintain a comprehensive inventory of service exchange product, built and tested to as-new performance specifications in our service centres. Call us, and we’ll ship you the exchange product and you simply return the original product to us – fast, simple and with minimum downtime!
http://docs.edwardsvacuum.com/Exchange/
2019-12-06T03:55:43
CC-MAIN-2019-51
1575540484477.5
[]
docs.edwardsvacuum.com
- YouTube Source Members of the Administrators and Content Managers built-in groups can add YouTube brand or user channel content to a Coveo Cloud organization. The source can be shared or private (see Content Security). The YouTube source does not completely support refreshes. A source rescan or rebuild is required to take account of the following content changes: Deleted videos Changes on videos Statistics on videos (e.g., view counts, likes, …) updates Playlist and playlist item updates You should thus configure the source schedule to start a rescan at least once a day (see Edit a Source Schedule). However, a YouTube source starts a refresh every day to retrieve YouTube item changes (addition, modification, or deletion). Source Features Summary Add or Edit a YouTube Source If not already in the Add/Edit a YouTube Source panel, go to the panel: To add a source, in the main menu, under Content, select Sources > Add source button > YouTube. CompanyXYZ-YouTube-Channel User, channel, or playlist URL The website address of one or more YouTube user or brand channels, or playlists. Include playlists Select the check box when you want YouTube channel playlists and playlist items to be included. This option is not used when you specify a playlist URL in the User, Channel, or Playlist URL parameter. YouTube content is refreshed every day.
https://docs.coveo.com/en/1637/
2019-12-06T03:58:01
CC-MAIN-2019-51
1575540484477.5
[]
docs.coveo.com
Jenkins plugin Important: This topic describes using a CI tool plugin to interact with the XL Release plugin for Jenkins at the global and job levels. Global Jenkins configuration Manage the global Jenkins configuration by navigating to Manage Jenkins > Configure System. You can specify the XL Release server URL and one or more sets of credentials. Different credentials can be used for different jobs. Job configuration In the job configuration page, select Post-build Actions > Add post-build action > Release with XL Release. For more information about the XL Release plugin for Jenkins, see the XebiaLabs XL Release documentation: - xl-release Release notes XL Release plugin when the version parameter is used Bug fixes - REL-4280 - Variable names set in Jenkins post-build action are overwritten by first variable in list - REL-4282 - Jenkins XL.
https://docs.xebialabs.com/v.9.0/xl-release/how-to/using-the-xl-release-plugin-for-jenkins/
2019-12-06T04:20:06
CC-MAIN-2019-51
1575540484477.5
[array(['/static/jenkins_plugin_config-95963093d1b5e2c8228aa943fdb9c2b2.png', 'XL Release plugin - global configuration'], dtype=object) array(['/static/jenkins_job_config-798450aeec1fa5a70d2a0700dee3eb7c.png', 'XL Release plugin - select a job'], dtype=object) array(['/static/jenkins_validate_template-bb9f3416ce9bdbea2c2b68fa14934795.png', 'XL Release plugin - validate template'], dtype=object) ]
docs.xebialabs.com
Manage product prices for B2B (tax excluded) and B2C (tax included)¶ When working with consumers, prices are usually expressed with taxes included in the price (e.g., in most eCommerce). But, when you work in a business environment, companies usually negotiate prices without taxes. You can manage. Let’s see how you can manage the product pricing for both customer at once. When you have a retailer customer on eCommerce store you can show the price and send the quotation with tax included in price and for same product you can prepare and send the quotation excluded price for back office sales to the business customer. Configuration¶ The best way to simplify the price by setting the product price as tax excluded by default, so all the price defined on the product are always tax excluded. The other way will be computed automatically. Product Price¶ Make sure that when you define the product price it will always tax excluded, and apply the default tax on the product form. Create taxes¶ Create a different taxes with the same percentage 15%, one define as Included in Price and for other one which is tax excluded rename the existing ‘Tax 15.00%’ to ‘Tax 15.00% Exe.’ Price List¶ Create a pricelist that have the product price always tax included. So, If you define the product Laptop price $1250 and default tax is 15% define the product price on price list as $1437.5. Create a Fiscal Position¶ Create a fiscal position that use to swap taxes. When you are selling to wholesale(b2b) customer, the default tax we have applied on the product is always tax excluded but when you sell to retail customer you have to apply the price which is tax included and tax which actually computed the tax included. Create a retail customer (b2c)¶ There are two important fields has to be set correctly when you create a retail customer, Sales Pricelist has to be set to Retail Pricelist (Tax. Inc.) (USD) under the Sales & Purchase tab. The Fiscal Position should be set to Retail Customer under the Accounting and tab. Create a normal customer (b2b)¶ By default all the customer are created are considered as business customers with the default pricelist and tax is applied which is always the tax excluded. Create a test quotation¶ Create a quotation from the Sale application, using the Sales / Quotations menu. Select the Ajay Patel as a customer, sell Laptop product, you should have the following result: 1250€ + 187.50€ = 1437.50€. When you create a quotation for the normal customer which has tax excluded will be looking as below. Tip If you negotiate a contract with a customer, whether you negotiate tax included or tax excluded, you can set the pricelist and the fiscal position on the customer form so that it will be applied automatically at every sale of this customer.
https://odoobooks.readthedocs.io/en/12.0/accounting/taxes/business_to_business_and_customer.html
2019-12-06T03:32:22
CC-MAIN-2019-51
1575540484477.5
[array(['../../_images/image12.png', 'image0'], dtype=object) array(['../../_images/image13.png', 'image1'], dtype=object) array(['../../_images/image9.png', 'image2'], dtype=object) array(['../../_images/image11.png', 'image3'], dtype=object) array(['../../_images/image8.png', 'image4'], dtype=object) array(['../../_images/image7.png', 'image5'], dtype=object) array(['../../_images/image16.png', 'image6'], dtype=object) array(['../../_images/image15.png', 'image7'], dtype=object)]
odoobooks.readthedocs.io
Elder on passage of School Aid budget LANSING — An updated and expanded version of the School Aid budget passed the House by a vote of 91 to 18. In response, state Rep. Brian Elder (D-Bay City), who voted against the budget, issued the following statement: “Our students and educators should be treated as a priority, not an afterthought. The budget passed today is not good enough for my people — or students across Michigan. Funding models that don’t keep pace with inflation are not a win, and I stand by my governor’s budgetary recommendations, and alongside many of my fellow Democratic colleagues in rejecting this budget. When we don’t invest in our students we set them and our schools up for failure, ultimately jeopardizing our chance at success in the long run. Michiganders deserve more than that.” ###
http://docs.housedems.com/article/elder-passage-school-aid-budget
2019-12-06T03:13:29
CC-MAIN-2019-51
1575540484477.5
[]
docs.housedems.com
Triggers are auto-populated messages that are displayed to a selected group of your website audience. A trigger is an automated message shown to the customer on your website via the Web or a Mobile channel. To show an automated message to visitors who are online on your website, use a Web Campaign/Trigger. Create Mobile Triggers from the Triggers tab in Acquire.
https://docs.acquire.io/category/how-to-setup-triggers
2019-12-06T03:03:56
CC-MAIN-2019-51
1575540484477.5
[]
docs.acquire.io
Dynamic Monitoring Mode lets each agent report one of three metric levels: - Key Performance Indicator metrics only (KPI mode) - KPI and Diagnostic metrics (Diagnostic mode) - All available metrics (Advanced Diagnostic mode) This provides the flexibility to report KPI metrics only on most machines and then increase the metric level on specific servers where you need deeper visibility to diagnose problems. You can increase scalability on the Controller and conserve metric bandwidth on the network with no sacrifice in visibility. Every Basic and Server Visibility metric has a default Dynamic Monitoring Mode (DMM) class: KPI, Diagnostic, and Advanced Diagnostic. To see the DMM class for each metric, see Hardware Resources Metrics. Important Notes Note the following: - You can disable DMM on individual agents. When DMM is disabled on an agent, that agent will report all metrics regardless of whether DMM is enabled or disabled on the Controller. Disabling DMM on an agent is recommended only for mission-critical servers and other machines for which you are sure you want to collect all metrics. For more information, see "Enable Dynamic Monitoring Mode (DMM)" under Standalone Machine Agent Configuration Properties. - If you switch the monitoring mode in the Controller from a more-inclusive to a less-inclusive mode, the Metric Browser will show values for the newly-excluded metrics for one hour after the switch. For example: suppose you switch from Diagnostic to KPI mode. For any Diagnostic metric, the Metric Browser will report a steady line (at 0 or the last-reported value) for one hour after the switch; then the line will disappear. This is standard behavior in the Metric Browser for an agent when it stops reporting a specific metric. - Each Machine Agent has a set of configurable settings that specify the volumes, networks, and processes to report and to ignore. For example, you can define a process whitelist (report on matching processes) and blacklist (ignore matching processes). An agent will not report metrics for items excluded by its local settings, regardless of the monitoring mode. For more information, see Machine Agent Settings for Server Visibility. - If you have any custom health rules based on Diagnostic or Advanced Diagnostic metrics, DMM might cause these rules to generate "false-positive" alerts. (Standard, non-custom health rules are not affected.) If you have any health rules like this, the workaround is to edit the rule to use a KPI metric in place of the Diagnostic or Advanced Diagnostic metric. Initial Setup DMM is enabled by default. To configure DMM, do the following: Log in to the Controller administration console using the root user password. For more information, see Access the Administration Console. http://<controller host>:<port>/controller/admin.jsp - Set the following options: - sim.machines.dmm.defaultMode = KPI (Sets the default mode to KPI on all Standalone Machine Agents). - sim.machines.dmm.dmmAllowed = true (Enables Dynamic Monitoring Mode on the Controller) - For servers of interest that require additional visibility, increase the monitoring mode to Diagnostic or Advanced Diagnostic. To change the monitoring mode of one or more agents, do the following: - Log in to the Controller UI and click Servers. - In the Servers table, select the servers whose monitoring mode you want to change. - Right-click the selection, choose Select Dynamic Monitoring Mode and select the new mode.
Example Workflow After you complete the initial setup, you can set the Dynamic Monitoring Mode on individual machine agents as needed. An example workflow might look like the following: - The DevOps team for a large enterprise monitors its IT infrastructure using Standalone Machine Agents (1,000-plus agents monitoring servers in hundreds of locations). All agents are initially set to KPI mode. - One agent reports a lot of disk read/write operations (KPI metric) on critical-server-A. - Set the agent DMM on critical-server-A to Diagnostic. - Monitor the amount of data read and written for the entire disk and for each partition (Diagnostic metrics) on critical-server-A. - If the Diagnostic metrics do not indicate the source of the problem, and further investigation is needed, set the agent DMM to Advanced Diagnostic. - Monitor average queue times and read/write times for each partition (Advanced Diagnostic metrics) on critical-server-A. - When advanced diagnostics are no longer required on critical-server-A, set the agent DMM back to KPI.
https://docs.appdynamics.com/display/PRO45/Dynamic+Monitoring+Mode+and+Server+Visibility
2019-12-06T03:04:17
CC-MAIN-2019-51
1575540484477.5
[]
docs.appdynamics.com
ServiceQuota A structure that contains the full set of details that define the service quota. Contents - Adjustable Specifies if the quota value can be increased. Type: Boolean Required: No - ErrorReason Specifies the ErrorCode and ErrorMessage when success isn't achieved. Type: ErrorReason object Required: No - GlobalQuota Specifies if the quota is global. Type: Boolean Required: No - Period Identifies the unit and value of how time is measured. Type: QuotaPeriod object Required: No - QuotaArn The Amazon Resource Name (ARN) of the service quota. Type: String Required: No - QuotaCode The code identifier for the service quota specified. Type: String Length Constraints: Minimum length of 1. Maximum length of 128. Pattern: [a-zA-Z][a-zA-Z0-9-]{1,128} Required: No - QuotaName The name identifier of the service quota. Type: String Required: No - ServiceCode Specifies the service that you want to use. Type: String Length Constraints: Minimum length of 1. Maximum length of 63. Pattern: [a-zA-Z][a-zA-Z0-9-]{1,63} Required: No - ServiceName The name of the AWS service specified in the increase request. Type: String Required: No - Unit The unit of measurement for the value of the service quota. Type: String Required: No - UsageMetric Specifies the details about the measurement. Type: MetricInfo object Required: No - Value The value of the service quota. Type: Double Valid Range: Minimum value of 0. Maximum value of 10000000000. Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
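As a sketch of how these fields surface through an SDK (not part of the API reference itself), the boto3 Service Quotas client returns ServiceQuota structures from calls such as list_service_quotas; the "ec2" service code below is just an example value:

import boto3

client = boto3.client("service-quotas")

# Each item in "Quotas" is a ServiceQuota structure with the fields described above.
response = client.list_service_quotas(ServiceCode="ec2")  # example service code
for quota in response["Quotas"]:
    print(
        quota.get("QuotaName"),
        quota.get("QuotaCode"),
        quota.get("Value"),
        quota.get("Unit"),
        quota.get("Adjustable"),
        quota.get("GlobalQuota"),
    )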
https://docs.aws.amazon.com/servicequotas/2019-06-24/apireference/API_ServiceQuota.html
2019-12-06T03:58:26
CC-MAIN-2019-51
1575540484477.5
[]
docs.aws.amazon.com
How to view Parent Teacher Interviews reports Use the Parent Teacher Interviews report to view the booking status of your staff and students. - Click on the Administration menu and then click on Parent Teacher Interviews under the Reports section. - Select Students or Teachers from the Report drop down list. - To see additional filtering options, click on the Filter Icon at the top of the page and make your selections. - Click on the Show button to view your results.
https://docs.xuno.com.au/article/295-how-to-view-parent-teacher-interviews-report
2019-12-06T03:15:15
CC-MAIN-2019-51
1575540484477.5
[]
docs.xuno.com.au
Lasinski on passage of auto insurance legislation:. Donna Lasinski (D-Scio Township) issued the following statement: “I was proud to be Vice Chair of the House Select Committee on Reducing Auto Insurance Rates, and remain proud of the bipartisan, transparent work we did on that committee to deliver fair, affordable reform for Michigan drivers. This legislation does not deliver on that promise, and that’s why I voted no. My constituents deserve reform that actually protects them from non-driving factors, and they deserve a transparent process to deliver that reform. We all need guaranteed rate relief that is not an average across the state, but a real reduction for each and every driver.” ###
http://docs.housedems.com/article/lasinski-passage-auto-insurance-legislation
2019-12-06T04:07:31
CC-MAIN-2019-51
1575540484477.5
[]
docs.housedems.com
Restricting MongoDB access by enabling authentication Follow this procedure for a stand-alone environment only (not when running MongoDB as a replica set). Video demonstration The following video (4:10) demonstrates how to restrict MongoDB access by enabling authentication and store the encrypted MongoDB password in the configuration file: To restrict MongoDB access by enabling authentication Log on to the MongoDB shell and enter the following commands:
use admin
db.createUser( {
  user: "siteUserAdmin",
  pwd: "<siteUserAdminPassword>",
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
});
use social
db.createUser( {
  user: "social_admin",
  pwd: "<social_adminPassword>",
  roles: [ { role: "dbOwner", db: "social" } ]
});
- Enable authentication by using either of the following methods: - Start the mongod process by using the --auth option. - In the mongo configuration, set auth = true and restart the mongo service. To connect the BMC Digital Workplace or Smart IT social service to mongo, change config.js in Smart_IT_MyIT/social to use the following value: - Restart the social service.
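Once authentication is enabled, clients must authenticate against the database where their user was created. The following is a minimal pymongo sketch of that idea (host, port, and password are placeholder assumptions; the social service itself is still configured through config.js as described above):

from pymongo import MongoClient

# social_admin was created in the "social" database, so authSource must be "social"
client = MongoClient(
    "mongodb://social_admin:<social_adminPassword>@localhost:27017/?authSource=social"
)

db = client["social"]
print(db.list_collection_names())  # raises an authentication error if the credentials are wrong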
https://docs.bmc.com/docs/digitalworkplacebasic/35/restricting-mongodb-access-by-enabling-authentication-818719894.html
2019-12-06T04:26:57
CC-MAIN-2019-51
1575540484477.5
[]
docs.bmc.com
Using the calendar app¶ Import a Calendar¶ Subscribe to a Calendar¶ You can subscribe to iCal calendars directly inside of your Nextcloud. By supporting this interoperable standard (RFC 5545) we made Nextcloud calendar compatible with Google Calendar, Apple iCloud and many other calendar servers you can exchange your calendars with. - Click on + New Subscription in the left sidebar. - Type in the link of the shared calendar you want to subscribe to. Finished. Your calendar subscriptions will be updated regularly. Managing Events¶ Create a new event¶ Edit or delete events¶ If you want to edit or delete a specific event, you just need to click on it. After that you will be able to re-set all of the event's details and open the advanced sidebar editor by clicking on More.... Clicking on the blue Update button will update the event. Clicking on the Cancel button will not save your edits. If you click on the red Delete button the event will be removed from your calendar. Birthday calendar¶ The birthday calendar is available unless your administrator has disabled this for your server.
https://docs.nextcloud.com/server/14/user_manual/pim/calendar.html
2019-12-06T03:06:01
CC-MAIN-2019-51
1575540484477.5
[array(['../_images/calendar_application.png', '../_images/calendar_application.png'], dtype=object) array(['../_images/calendar_create.gif', '../_images/calendar_create.gif'], dtype=object) array(['../_images/calendar_new-event_week.gif', '../_images/calendar_new-event_week.gif'], dtype=object) array(['../_images/calendar_new-event_month.gif', '../_images/calendar_new-event_month.gif'], dtype=object)]
docs.nextcloud.com
Save the date 04/20/2020 08:00 AM 04/23/2020 06:00 PM Montreux/Switzerland Our WBCSD Liaison Delegate Meeting will bring together over 400 sustainability professionals from all sectors and geographies to focus on how companies can capture dividends from sustainable business and explore collaborative pathways to drive systems-transformation. You will have the opportunity to meet with your peers and work on transformative solutions that will help scale up business impact while allowing your company to better manage risks and unlock market opportunities. WBCSD Liaison Delegate Meeting 2020 Montreux, Switzerland
https://docs.wbcsd.org/events/Save_the_date/LD2020/wbcsd_liaison_delegate_meeting_2020.html
2019-12-06T03:04:09
CC-MAIN-2019-51
1575540484477.5
[]
docs.wbcsd.org
Lists the values specifying permissions that are used to restrict or allow access to document modification operations. Namespace: DevExpress.Pdf Assembly: DevExpress.Pdf.v19.2.Core.dll public enum PdfDocumentModificationPermissions Public Enum PdfDocumentModificationPermissions Permit document modification and assembling. Allow only document assembling such as inserting, rotating or deleting pages, as well as bookmark creation on the navigation pane. Prohibit document modification and assembling. The values listed by the PdfDocumentModificationPermissions enumeration are used to set the PdfEncryptionOptions.ModificationPermissions property.
https://docs.devexpress.com/OfficeFileAPI/DevExpress.Pdf.PdfDocumentModificationPermissions
2019-12-06T03:27:40
CC-MAIN-2019-51
1575540484477.5
[]
docs.devexpress.com
Routing requests based on message content This is a 5-minute guide to give you a quick overview of how WSO2 EI mediates and routes messages from a front-end service (client) to a back-end service. Let's get started! Additional Capabilities Exposing a datasource as a service Sending messages securely Defining a BPMN process Other WSO2 EI capabilities - In comparison with the conventional ESB profile, the start-up time of the Micro Integrator profile is considerably less, which makes it container-friendly and therefore ideal for microservices in a container-based environment. For more information, try out the Sending a Simple Message to a Service Using the Micro Integrator tutorial. - Work with WSO2 EI connectors. - For all the connectors supported by WSO2 EI, see WSO2 ESB Connectors. - Use the Gmail connector in WSO2 EI by trying out the Using the Gmail Connector tutorial. - All the CAR files that were used in this tutorial were developed using WSO2 EI Tooling. You can try it out by following the tutorials listed under Integration Tutorials. - Use WSO2 EI Analytics to analyze the mediation statistics. For more information, try out the following tutorial:
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=92532437&selectedPageVersions=28&selectedPageVersions=29
2019-12-06T04:04:15
CC-MAIN-2019-51
1575540484477.5
[]
docs.wso2.com
Implement custom plugpoints Functionality in the XL Deploy server can be customized by using plugpoints. Plugpoints are specified and implemented in Java. On startup, XL Deploy scans its classpath for implementations of its plugpoints in the com.xebialabs or ext.deployit packages and prepares them for use. There is no additional configuration required. The XL Deploy Server supports the following plugpoints: - Protocol: Specifies a new method for connecting to remote hosts. - Deployment package importer: Used to import deployment packages in a custom format. - Orchestrator: Controls how XL Deploy combines plans to generate the overall deployment workflow. - Event listener: Specifies a listener for XL Deploy notifications and commands. For more information on Java API, see udm-plugin-api Defining Protocols A protocol in XL Deploy is a method for making a connection to a host. Overthere, the XL Deploy remote execution framework, uses protocols to build a connection with a target machine. Protocol implementations are read by Overthere when XL Deploy starts. Classes implementing a protocol must adhere to the following requirements: - The class must implement the OverthereConnectionBuilderinterface. - The class must have the @Protocolannotation. - Define a custom host CI type that overrides the default value for property protocol. Example of a custom host CI type: <type type="custom.MyHost" extends="overthere.Host"> <property name="protocol" default="myProtocol" hidden="true"/> </type> The OverthereConnectionBuilder interface specifies only one method, connect. This method creates and returns a subclass of OverthereConnection representing a connection to the remote host. The connection must provide access to files ( OverthereFile instances) that XL Deploy uses to execute deployments. For more information, see the Overthere project. Defining Importers and ImportSources An importer is a class that turns a source into a collection of XL Deploy entities. Both the import source and the importer can be customized. XL Deploy includes a default importer that understands the DAR package format. Import sources are classes implementing the ImportSource interface and can be used to obtain a handle to the deployment package file to import. Import sources can also implement the ListableImporter interface, which indicates they can produce a list of possible files that can be imported. The user can make a selection of these options to start the import process. When the import source has been selected, all configured importers in XL Deploy are invoked, in turn, to determine if any importer is capable of handling the selected import source, using the canHandle method. The first importer that indicates it can handle the package is used to perform the import. The XL Deploy default importer is used as a fallback. The preparePackage method is invoked. This instructs the importer to produce a PackageInfo instance describing the package metadata. This data is used by XL Deploy to determine if the user requesting the import has sufficient rights to perform it. If so, the importer’s importEntities method is invoked, enabling the importer to read the import source, create deployables from the package and return a complete ImportedPackage instance. XL Deploy will handle storing of the package and contents. Defining Orchestrators An orchestrator is a class that performs the orchestration stage. 
The orchestrator is invoked after the delta-analysis phase, before the planning stage, and implements the Orchestrator interface containing a single method:

Orchestration orchestrate(DeltaSpecification specification);

For example, this is the Scala implementation of the default orchestrator:

@Orchestrator.Metadata(name = "default", description = "The default orchestrator")
class DefaultOrchestrator extends Orchestrator {
  def orchestrate(specification: DeltaSpecification) =
    interleaved(getDescriptionForSpec(specification), specification.getDeltas)
}

It takes all delta specifications and puts them together in a single, interleaved plan. This results in a deployment plan that is ordered solely on the basis of the step's order property. In addition to the default orchestrator, XL Deploy also contains the following orchestrators: sequential-by-container and parallel-by-container orchestrators. These orchestrators group together steps that deal with the same container, enabling deployments across a collection of middleware. sequential-by-composite-package and parallel-by-composite-package orchestrators. These orchestrators group together steps by contained package. The order of the member packages in the composite package is preserved. sequential-by-deployment-group and parallel-by-deployment-group orchestrators. These orchestrators use the deployment group synthetic property on a container to group steps for all containers with the same deployment group. These orchestrators are provided by a separate plugin that comes bundled with XL Deploy inside the plugins/ directory. Defining Event Listeners XL Deploy sends events that listeners can act upon. There are two types of events in XL Deploy: - Notifications: Events that indicate XL Deploy has executed a particular action. - Commands: Events that indicate XL Deploy is about to execute a particular action. Commands are fired before an action takes place, while notifications are fired after an action has taken place. Listening for notifications Notifications indicate a particular action has occurred in XL Deploy. Some examples of notifications in XL Deploy are: - The system is started or stopped. - A user logs into or out of the system. - A CI is created, updated, moved or deleted. - A security role is created, updated or deleted. - A task, such as: deployment, undeployment, control task, or discovery; is started, cancelled, or aborted. Notification event listeners are Java classes that have the @DeployitEventListener annotation and have one or more methods annotated with the T2 event bus @Subscribe annotation. For example, this is the implementation of a class that logs all notifications it receives:

import nl.javadude.t2bus.Subscribe;

import com.xebialabs.deployit.engine.spi.event.AuditableDeployitEvent;
import com.xebialabs.deployit.engine.spi.event.DeployitEventListener;
import com.xebialabs.deployit.plugin.api.udm.ConfigurationItem;

/**
 * This event listener logs auditable events using our standard logging facilities.
 **/
@DeployitEventListener
public class TextLoggingAuditableEventListener {

    @Subscribe
    public void log(AuditableDeployitEvent event) {
        logger.info("[{}] - {} - {}", new Object[] { event.component, event.username, event.message });
    }

    private static Logger logger = LoggerFactory.getLogger("audit");
}

Listening for commands Commands indicate that XL Deploy has been asked to perform a particular action. Some examples of commands in XL Deploy are: - A request to create a CI or CIs has been received. - A request to update a CI has been received.
- A request to delete a CI or CIs has been received.
Command event listeners are Java classes that have the @DeployitEventListener annotation and have one or more methods annotated with the T2 event bus @Subscribe annotation. Command event listeners have the option of rejecting a particular command, which causes it to not be executed. Veto event listeners indicate in the Subscribe annotation that they have the ability to reject the command, and veto the command by throwing a VetoException from the event handler method. For example, this listener class listens for update CI commands and optionally vetoes them:

@DeployitEventListener
public class RepositoryCommandListener {

    public static final String ADMIN = "admin";

    @Subscribe(canVeto = true)
    public void checkWhetherUpdateIsAllowed(UpdateCiCommand command) throws VetoException {
        checkUpdate(command.getUpdate(), newHashSet(command.getRoles()), command.getUsername());
    }

    private void checkUpdate(final Update update, final Set<String> roles, final String username) {
        if(...) {
            throw new VetoException("UpdateCiCommand vetoed");
        }
    }
}
https://docs.xebialabs.com/v.9.0/xl-deploy/how-to/implement-custom-xl-deploy-plugpoints/
2019-12-06T04:30:30
CC-MAIN-2019-51
1575540484477.5
[]
docs.xebialabs.com
Core concepts Releases are at the heart of XL Release. A release represents a number of activities in a certain time period, with people working on them. XL Release allows you to plan, track, and execute releases automatically. It acts as a single source of truth for everyone who is involved in making the release a success. A release is divided into phases, which represent logical stages in the process that must happen in succession. For example, a release could include Development, QA, and Deployment phases. In XL Release, a phase is a grouping of tasks, which are the activities that must be done to fulfill the release. Tasks are activities in a release. In XL Release, everything that must be done is defined as a task. There are manual tasks, in which a human must do something, and automated tasks that the XL Release flow engine performs, unattended. When a release is started, XL Release executes the release flow. This is the workflow of the release. XL Release determines the current task that needs to be picked up and executes it (if it is an automated task) or sends a message to the person responsible for it (if it is a manual task). Each release has a release owner, the person that is responsible for correct performance of the release. If something goes wrong, the release owner will be notified. For example if an automated task produces an error, or one of the people working on a task indicates that he is in trouble. A template is a blueprint for a release. You can use a template to start different releases that have the same flow. A template is very similar to a release; however, some functionality is different because a template is never executed directly. For example, there are no start or end dates in a template; most tasks are assigned to teams rather than actual people; and variables are used as placeholders for information that differs from release to release, such as an application’s version number. Each release or release template defines a set of teams. A team is a logical grouping of people who perform a certain role. For example, on a release you can define a Development team, a QA team, an OPS team, and a Release Management team.
https://docs.xebialabs.com/v.9.0/xl-release/concept/core-concepts-of-xl-release/
2019-12-06T04:16:35
CC-MAIN-2019-51
1575540484477.5
[]
docs.xebialabs.com
Prerequisites None. 3 Editing the Server Configuration with Extra JVM Parameters In this section, you edit the server configuration to add extra JVM parameters. Follow these steps: - Open the project settings. - Edit the configuration. Go to the Server tab on the Edit Configuration editor and add the following line to the Extra JVM parameters field: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 Next, start your application in Mendix.
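Once the application is running with the JDWP options above, you can confirm that the JVM is actually listening for a debugger before attaching your IDE's remote debug configuration. The snippet below is just a convenience check written in Python; the host and port are assumptions matching the address=5005 setting shown above for a local setup.

# Quick check that a JVM started with -agentlib:jdwp=...,address=5005 is listening.
import socket

HOST, PORT = "127.0.0.1", 5005   # assumptions for a local runtime

try:
    with socket.create_connection((HOST, PORT), timeout=2):
        print(f"A debug listener is accepting connections on {HOST}:{PORT}")
except OSError as exc:
    print(f"Nothing is listening on {HOST}:{PORT} yet: {exc}")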
https://docs.mendix.com/howto6/debug-java-actions-remotely
2017-08-16T15:25:36
CC-MAIN-2017-34
1502886102307.32
[array(['http://www.andrejkoelewijn.com/blog/images/2014/01/mx-java-debug/intellij_rundebug_configurations.png', 'Mendix Intellij remote debugging'], dtype=object) ]
docs.mendix.com
You can choose to update WooCommerce with one click or manually update it. Important: Before updating, we recommend that you back up your current WooCommerce installation and your WordPress database. See How To Update Your Site on how to make a backup and test before going live. One-Click Update ↑ Back to top Again, be certain you've read how to update your site. Manual Update ↑ Back to top - Download the latest version of WooCommerce from WordPress.org. - Upload the unzipped WooCommerce folder to the wp-content/plugins directory on your web server, overwriting the old files. Questions ↑ Back to top Still have a question and need assistance? Get in touch with a Happiness Engineer via the Help Desk.
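For the manual update path above, it can help to script the backup step so the old plugin folder is always preserved before the new files overwrite it. The Python sketch below is one way to do that; the WordPress root path is an assumption you must adjust for your server, and this does not replace the full site and database backup the documentation recommends.

# Copy the existing WooCommerce plugin folder aside before overwriting it.
import shutil
from datetime import datetime
from pathlib import Path

WORDPRESS_ROOT = Path("/var/www/html")   # assumption: adjust to your install
plugin_dir = WORDPRESS_ROOT / "wp-content" / "plugins" / "woocommerce"
backup_dir = plugin_dir.with_name(f"woocommerce-backup-{datetime.now():%Y%m%d-%H%M%S}")

if plugin_dir.exists():
    shutil.copytree(plugin_dir, backup_dir)
    print(f"Copied {plugin_dir} -> {backup_dir}")
else:
    print(f"No plugin folder found at {plugin_dir}")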
https://docs.woocommerce.com/document/updating-woocommerce/
2017-08-16T15:08:16
CC-MAIN-2017-34
1502886102307.32
[]
docs.woocommerce.com
Improve sound quality for media files Depending on your BlackBerry® device model, this feature might not be supported. To improve sound quality for media files, you must be using stereo headphones with your device.
http://docs.blackberry.com/en/smartphone_users/deliverables/22178/Improve_sound_quality_for_music_files_60_1143481_11.jsp
2014-12-18T07:23:02
CC-MAIN-2014-52
1418802765678.46
[]
docs.blackberry.com
"Hello, Boo!" This tutorial assumes you have basic computing knowledge and are comfortable with the command line interface of your operating system. If you're familiar with programming in general, you might just want to browse the code snippets and ignore the rest of the article. *Things required: *Boo. *A .NET runtime - either Microsoft's .NET Runtime on Windows, or Mono on Linux. *Your favorite text editor. *Being comfortable with the command-line, for now. *Free time. Before we even get started, it's time to show you the obligatory "Hello, world!" application. Every language tutorial has one, and Boo is no exception! Everybody's gotta start somewhere. Crack open Your Favorite Text Editor. On the very first line, type:
print "Hello, world!"
Save the file as hello.boo, and make a note of the directory it's been saved in. Go to the command prompt and find the directory you installed Boo into, and go to the bin subdirectory. /boo/bin, if you will! Now, type booi followed by the path to your hello.boo file. You'll see "Hello, world!" printed on screen. Welcome to Boo. "What is this 'print' thing?" Print is an expression in Boo that is used to feed output to a device called "Standard Output." Ignoring the concept of "Standard Output" completely, what it means (for now) is that the "print" expression will just show text in the console, like you just saw with the hello world program. "Hello, $name" What if we want to get really specific - what if we want to print out someone else's name instead of just saying hello to the entire freaking planet, eh? We could, of course, but every time we wanted to say hello to someone new, we would be in quite a quandary! But, never fear, for your hero is here, and he will now show you how to read input from the user - don't worry, it's really easy and you don't have to worry about it. Crack open hello.boo, replace its contents with the following three lines, and save the file:
import System
name = Console.ReadLine()
print "Hello, ${name}"
We'll break it down line-by-line in a second, but for now, run "booi" just like you did before, and stare in awe: nothing's happening! All there is is a blinking cursor! Type your name and press enter. I typed "Bongo," because I'm a freak. Neat, eh, but what happened? System is a namespace - it's like a box with lots of delicious snacktreats in it, or, if you're on a diet, like a box full of slightly stale protein bars. The "Console" class is one of these delicious treats, just waiting to be plucked from the box. We could have accessed the "Console" class by using "System.Console," but we didn't - why? Using the "import" keyword is a way of saying, "dump all the contents of the System namespace into my file so I don't have to keep typing the namespace, 'System,' before everything." Why would you do this? Because you're lazy, that's why. Here you are doing two things - you are calling a member of the "Console" class, called "ReadLine()", and storing a value it returns into "name." ReadLine() is a method that waits for the user to type something and press enter, and returns a string of characters. This string goes into the "name" object. Thus, were the user - an upstanding citizen such as yourself - to type "Bongo," then "name" would now have the contents "Bongo" after the user pressed the enter key. This is the easiest part of the program - it's called "String Interpolation." The curly brace symbols essentially mean, 'embed this object inside of this string,' so when you write "$name" you are really saying, "replace $name with the contents of name."
Since we typed in "Bongo" earlier and stored that in the name variable, instead of seeing "Hello, $name" printed on the screen we will instead see, "Hello, Bongo." Take special note: using $<object> actually calls a special member that every object has, called, 'ToString()' - this member returns a string that represents a formatted description of the object. Not all classes implement their own custom ToString() member, so you might see something strange like 'System.DateTime' instead of the actual date and time. Exercises: *Create a program that reads in the user's name and prints outs something like, "Your name is $name. Hello, $name!" except that $name is replaced with the user's name. *Create a program that reads in the user's first name, and then the user's last name, and print them together, like, "Your name is $firstname $lastname." You'll need at least two variables, and if you just read the last sentence, you'll probably have an inkling of how to do it. Re-examine the program if you are feeling lost. *Tip: There are many more classes available in the System namespace - go to Microsoft .NET Class guide and check out the namespaces available - there are tons of classes inside! Remember to use "import" or else you'll be typing System.Console" all year long. 5 Comments William Trenker As a next step, I'm trying to get the Hello World recipe working using WinForms. But when I try "import Windows.Systems.Forms" I get a Namespace error message suggesting "maybe you forgot to add an assembly reference." What step am I missing? (I'm quite experienced with Python and understand COM but I'm new to dotNET. I've got the dotNET 1.1 redistributable and SDK installed here on WinXP SP2. For boo itself I'm using booxw-1203-bin.zip) Thanks, Bill William Trenker Ok, I've dug in deeper and I'm learning. One discovery is the difference between the contents of boo-0.4.5-bin.zip and booxw-1203-bin.zip. I had intially installed only the latter. Then I experimented and learned that boo-0.4.5-bin.zip contains a full distribution. So with a full install I've found the various hello scripts in the examples folder. One thing I learned is to use "import System.Windows.Forms from System.Windows.Forms" not just "import System.Windows.Forms". Another ah-ah is the importance of using booi not booish. So now the hello world examples work. And booc makes .exe's – very nifty. Bill William Trenker Oh brother – as the fog starts to clear I realize I got totally confused about boo-xxx.zip and booxw-xxx.zip. I'll get this straight yet. Bill dholton dholton Hi, when you are compiling a boo script with booc, you need to add a reference to the System.Windows.Forms.dll. You can do this either on the command line (booc -r:System.Windows.Forms.dll -out:myscript.exe myscript.boo), or in your script change your import line to this: import System.Windows.Forms from System.Windows.Forms Also things have been changing so fast that you might want to get tortoisesvn and download the source from the SVN repository, see: And one last thing. If you want you can skip all the above and get the boo add-in (download the installer from that page) for the free SharpDevelop IDE. Then you don't have to use booc directly, and also that installer is very recent so it will have the newer features in boo. dholton dholton I forgot to mention, I didn't see your comments until today. If you send any questions you have to the Mailing Lists we can answer them quicker.
http://docs.codehaus.org/display/BOO/Hello+Boo+-+a+beginner+tutorial?focusedCommentId=17469
2014-12-18T07:31:13
CC-MAIN-2014-52
1418802765678.46
[]
docs.codehaus.org
Let's assume a project committer wants to check out a Subversion repository. Assuming the repository has been laid out according to the recommendations in How to Organize a Subversion Repository, the following command would check out project1 located in the FOO repository using the authenticated svnserve access method:

svn co svn+ssh://svn.FOO.codehaus.org/home/projects/FOO/scm/project1/trunk

If you wanted to check out a specific branch of project1, the following command could be used:

svn co svn+ssh://svn.FOO.codehaus.org/home/projects/FOO/scm/project1/branches/somebranch

It is important to note that when using the authenticated svnserve access method (svn+ssh protocol identifier) you must specify the full path to the Subversion repository on the Codehaus server. This includes the /home/projects directory. This is not the case when accessing the repository anonymously.
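If you check out many projects that follow this layout, the URL construction can be automated. The helper below is an illustrative Python sketch that assembles the svn+ssh URL with the full /home/projects path described above and can optionally shell out to the standard svn client; the function name is made up for the example.

# Assemble the checkout URL in the layout described above and optionally run svn.
import subprocess


def codehaus_svn_url(repo: str, project: str, branch: str = "trunk") -> str:
    path = "trunk" if branch == "trunk" else f"branches/{branch}"
    return (
        f"svn+ssh://svn.{repo}.codehaus.org"
        f"/home/projects/{repo}/scm/{project}/{path}"
    )


if __name__ == "__main__":
    url = codehaus_svn_url("FOO", "project1")
    print(url)  # svn+ssh://svn.FOO.codehaus.org/home/projects/FOO/scm/project1/trunk
    # Uncomment to actually run the checkout (requires the svn client and SSH access):
    # subprocess.run(["svn", "co", url], check=True)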
http://docs.codehaus.org/exportword?pageId=1376
2014-12-18T07:26:44
CC-MAIN-2014-52
1418802765678.46
[]
docs.codehaus.org
This page represents the current plan; for discussion please check the tracker link above. Description Query is an interface with a single implementation ... DefaultQuery. As such it is not earning its keep and represents needless complexity. To simplify, turn Query into a class and pull up the method implementations from DefaultQuery. A patch is available on the attached bug report; it also updates the Query Javadocs (something else that can be improved without a proposal). Status This proposal is ready; there is a patch that can be applied. We are done and the patch has been applied to 2.7.x! - Andrea Aime +1 - Ben Caradoc-Davies +1 - Christian Mueller +1 - Ian Turton +0 Tasks: - Apply patch from the attached bug report - Update example code
http://docs.codehaus.org/display/GEOTOOLS/Query+as+Class
2014-12-18T07:26:38
CC-MAIN-2014-52
1418802765678.46
[]
docs.codehaus.org
Prior to the installation: - Get an overview of the Sonar technical architecture. - Check the requirements. Install the Sonar Runner or Analyzer The Sonar Analyzer can be triggered by 4 different bootstrappers: - Sonar Runner: recommended for non-Java projects, see installation and configuration guide. - Maven: recommended for Java projects built with Maven, see installation and configuration guide. - Ant Task: recommended for Java projects built with Ant, see installation and configuration guide. - Gradle: recommended for Java projects built with Gradle, see installation and configuration guide. Install Sonar Download and unzip the distribution. Database Install): Advanced Installation To run Sonar behind a proxy, browse this page.
http://docs.codehaus.org/pages/viewpage.action?pageId=229737032
2014-12-18T07:32:41
CC-MAIN-2014-52
1418802765678.46
[]
docs.codehaus.org
The performance data that is collected during a profiling session can be presented and analyzed in a snapshot. Once the snapshot is taken, you can study the data, locate a problem, and plan what action to proceed with. The following views can be used to analyze performance snapshots: Each view can be displayed in different ways for more convenient analysis by grouping, sorting and filtering nodes. Furthermore, you have the ability to Compare Snapshots in order to track changes.
http://docs.telerik.com/help/justtrace/snapshots-performance-snapshots.html
2017-09-19T19:23:15
CC-MAIN-2017-39
1505818685993.12
[]
docs.telerik.com
Improving Amazon Redshift Spectrum Query Performance Look at the query plan to find what steps have been pushed to the Amazon Redshift Spectrum layer. The following steps are related to the Redshift Spectrum query: S3 Seq Scan S3 HashAggregate S3 Query Scan Seq Scan PartitionInfo Partition Loop The following example shows the query plan for a query that joins an external table with a local table. Note the S3 Seq Scan and S3 HashAggregate steps that were executed against the data on Amazon S3. Copy explain select top 10 spectrum.sales.eventid, sum(spectrum.sales.pricepaid) from spectrum.sales, event where spectrum.sales.eventid = event.eventid and spectrum.sales.pricepaid > 30 group by spectrum.sales.eventid order by 2 desc; Copy QUERY PLAN ----------------------------------------------------------------------------- XN Limit (cost=1001055770628.63..1001055770628.65 rows=10 width=31) -> XN Merge (cost=1001055770628.63..1001055770629.13 rows=200 width=31) Merge Key: sum(sales.derived_col2) -> XN Network (cost=1001055770628.63..1001055770629.13 rows=200 width=31) Send to leader -> XN Sort (cost=1001055770628.63..1001055770629.13 rows=200 width=31) Sort Key: sum(sales.derived_col2) -> XN HashAggregate (cost=1055770620.49..1055770620.99 rows=200 width=31) -> XN Hash Join DS_BCAST_INNER (cost=3119.97..1055769620.49 rows=200000 width=31) Hash Cond: ("outer".derived_col1 = "inner".eventid) -> XN S3 Query Scan sales (cost=3010.00..5010.50 rows=200000 width=31) -> S3 HashAggregate (cost=3010.00..3010.50 rows=200000 width=16) -> S3 Seq Scan spectrum.sales location:"s3://awssampledbuswest2/tickit/spectrum/sales" format:TEXT (cost=0.00..2150.00 rows=172000 width=16) Filter: (pricepaid > 30.00) -> XN Hash (cost=87.98..87.98 rows=8798 width=4) -> XN Seq Scan on event (cost=0.00..87.98 rows=8798 width=4) Note the following elements in the query plan: The S3 Seq Scannode shows the filter pricepaid > 30.00was processed in the Redshift Spectrum layer. A filter node under the XN S3 Query Scannode indicates predicate processing in Amazon Redshift on top of the data returned from the Redshift Spectrum layer. The S3 HashAggregatenode indicates aggregation in the Redshift Spectrum layer for the group by clause ( group by spectrum.sales.eventid). Following are ways to improve Redshift Spectrum performance: Use Parquet formatted data files. Parquet stores data in a columnar format, so Redshift Spectrum can eliminate unneeded columns from the scan. When data is in textfile format, Redshift Spectrum needs to scan the entire file. Use the fewest columns possible in your queries. Use multiple files to optimize for parallel processing. Keep your file sizes between 100 MB and 1 GB. Avoid data size skew by keeping files about the same size. Put your large fact tables in Amazon S3 and keep your frequently used, smaller dimension tables in your local Amazon Redshift database. Update external table statistics by setting the TABLE PROPERTIES numRows parameter. Use CREATE EXTERNAL TABLE or ALTER TABLE to set the TABLE PROPERTIES numRows parameter to reflect the number of rows in the table. Amazon Redshift doesn't analyze external tables to generate the table statistics that the query optimizer uses to generate a query plan. If table statistics aren't set for an external table, Amazon Redshift generates a query execution plan based on an assumption that external tables are the larger tables and local tables are the smaller tables. 
The Amazon Redshift query planner pushes predicates and aggregations to the Redshift Spectrum query layer whenever possible. When large amounts of data are returned from Amazon S3, the processing is limited by your cluster's resources. Redshift Spectrum scales automatically to process large requests. Thus, your overall performance improves whenever you can push processing to the Redshift Spectrum layer. Write your queries to use filters and aggregations that are eligible to be pushed to the Redshift Spectrum layer. The following are examples of some operations that can be pushed to the Redshift Spectrum layer: GROUP BY clauses Comparison conditions and pattern-matching conditions, such as LIKE. Aggregate functions, such as COUNT, SUM, AVG, MIN, and MAX. String functions. Operations that can't be pushed to the Redshift Spectrum layer include DISTINCT and ORDER BY. Use partitions to limit the data that is scanned. Partition your data based on your most common query predicates, then prune partitions by filtering on partition columns. For more information, see Partitioning Redshift Spectrum External Tables. Query SVL_S3PARTITION to view total partitions and qualified partitions.
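One practical way to apply the guidance above is to run EXPLAIN programmatically and check which Redshift Spectrum plan nodes appear, i.e. how much work was pushed to the Spectrum layer. The sketch below does this with the psycopg2 driver and the example join query from this page; the connection details are placeholders, and the node-name list simply mirrors the Spectrum-related steps listed at the top of this page.

# Run EXPLAIN and report which Redshift Spectrum plan nodes show up.
import psycopg2

SPECTRUM_NODES = [
    "S3 Seq Scan",
    "S3 HashAggregate",
    "S3 Query Scan",
    "Seq Scan PartitionInfo",
    "Partition Loop",
]

QUERY = """
select top 10 spectrum.sales.eventid, sum(spectrum.sales.pricepaid)
from spectrum.sales, event
where spectrum.sales.eventid = event.eventid
and spectrum.sales.pricepaid > 30
group by spectrum.sales.eventid
order by 2 desc
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="awsuser", password="...",
)
cur = conn.cursor()
cur.execute("EXPLAIN " + QUERY)
plan = "\n".join(row[0] for row in cur.fetchall())
conn.close()

found = [node for node in SPECTRUM_NODES if node in plan]
print("Spectrum plan nodes found:", found or "none (nothing was pushed down)")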
http://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-performance.html
2017-09-19T19:13:17
CC-MAIN-2017-39
1505818685993.12
[]
docs.aws.amazon.com
This is the first series of tutorials for Rainmeter. It is meant to follow Getting Started and assumes that you have gone through the steps described there. If you have not been through these steps, you should start here. This series starts by creating a simple first skin, then covers the anatomy of Rainmeter skins, focusing on each of the basic skin properties in turn: Meters A skin's display elements. How to create objects that can be seen and clicked. Measures A skin's informational elements. How to pull data from your computer or the Internet. Bangs A skin's interactive elements. How to send commands to Rainmeter or other applications. Updates A skin's internal clock. How to control the timing and synchronicity of skin events. Variables A skin's data elements. How to manipulate independent strings that are used to store many kinds of information. This series is in development. Check back soon for the first installment. « Back to: Anatomy of a Skin
https://docs.rainmeter.net/manual-beta/getting-started/basic-tutorials/
2017-09-19T19:02:13
CC-MAIN-2017-39
1505818685993.12
[]
docs.rainmeter.net
Measure=Calc calculates mathematical formulas. Options - General measure options - All general measure options are valid. Formula (Default: 0) - Formula to calculate. The syntax can be found on the Formulas page. The Calc measure utilizes additional syntax as described below. UpdateRandom (Default: 0) - If set to 1, the random constant is regenerated on each update cycle. UniqueRandom (Default: 0) - If set to 1, any measure using the random constant and UpdateRandom will not repeat until all values between and including LowBound and HighBound have been used. Any dynamic change to LowBound or HighBound will reset the unique tracking of values. Note: UniqueRandom will only function if the difference between LowBound and HighBound is at most a 16-bit unsigned integer, or 65535. LowBound (Default: 0.0) - Lower bound of the random constant. HighBound (Default: 100.0) - Upper bound of the random constant. Note: The maximum value of HighBound is a 32-bit signed integer, or 2147483647. Additional Formula Syntax Functions Note: These functions are only available in the context of the Formula option of a Calc measure, and not in (formulas) used in other measure or meter types. Random: A random number. The number will be between and include the values set in LowBound and HighBound. Counter: The number of update cycles from the time the skin is loaded. This number only resets when the skin is unloaded and then loaded again - not when the skin is refreshed. Other Bases The Calc measure allows numbers to be represented in numbering systems other than decimal. To use another base, prefix the number with a zero and then the letter representing the system you wish to use. The following are accepted prefixes, which are case-sensitive (lowercase): 0b - Binary number (base 2) (ex: 0b110110 - returns 54 in decimal) 0o - Octal number (base 8) (ex: 0o123 - returns 83 in decimal) 0x - Hexadecimal number (base 16) (ex: 0xF1 - returns 241 in decimal) Measure Values The use of measure values in the Formula option of a Calc measure does not require the use of Dynamic Variables. Simply omit the [] and the number value of the measure will be retrieved. If the measure does not have a number value, 0 will be used. Section Variables, Variables and Built-in Variables follow the normal rules. For more information see the Dynamic Cheat Sheet.
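The decimal equivalents quoted for the 0b, 0o, and 0x prefixes (and the bounds mentioned in the notes above) are easy to verify outside Rainmeter. Python happens to accept the same literal prefixes, so a quick sanity check looks like this:

# Verify the numbers quoted in the Other Bases section and the notes above.
print(0b110110)   # 54  (binary example from the text)
print(0o123)      # 83  (octal example)
print(0xF1)       # 241 (hexadecimal example)

print(2**16 - 1)  # 65535       largest LowBound-to-HighBound span for UniqueRandom
print(2**31 - 1)  # 2147483647  maximum HighBound (32-bit signed integer)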
https://docs.rainmeter.net/manual-beta/measures/calc/
2017-09-19T19:01:31
CC-MAIN-2017-39
1505818685993.12
[]
docs.rainmeter.net
Plugin=iTunesPlugin retrieves the currently playing track information from the iTunes application. Note: The iTunes plugin is deprecated. Use the NowPlaying plugin instead. Options - General measure options - All general measure options are valid. Command - Defines the information to measure. Valid values are: GetSoundVolume: Player volume between 0 - 100. GetPlayerPosition: Player position in seconds. GetPlayerPositionPercent: Player position as a percentage. GetCurrentTrackAlbum: Album. GetCurrentTrackArtist: Artist. GetCurrentTrackBitrate: Bitrate. GetCurrentTrackBPM: Beats per minute. GetCurrentTrackComment: Track comment. GetCurrentTrackComposer: Track composer. GetCurrentTrackEQ: EQ preset name. GetCurrentTrackGenre: Genre (category). GetCurrentTrackKindAsString: File description. GetCurrentTrackName: Track name. GetCurrentTrackRating: Rating from 0 - 100. GetCurrentTrackSampleRate: Sample rate. GetCurrentTrackSize: File size. GetCurrentTrackTime: Length of the track. GetCurrentTrackTrackCount: Number of tracks on the album. GetCurrentTrackTrackNumber: Track number or index. GetCurrentTrackYear: Track year. GetCurrentTrackArtwork: Artwork file path. Use in combination with DefaultArtwork. DefaultArtwork - Path of the artwork folder relative to the skin folder. Used with Command=GetCurrentTrackArtwork. Bangs iTunes measures can be controlled with the !CommandMeasure bang with the argument parameter being: Backtrack: Reposition to the beginning of the current track, or go to the previous track if already at the start of the current track. FastForward: Skip forward in a playing track. NextTrack: Advance to the next track in the current playlist. Pause: Pause playback. Play: Play the currently targeted track. PlayPause: Toggle the playing/paused state of the current track. PreviousTrack: Return to the previous track in the current playlist. Resume: Disable fast forward/rewind and resume playback if playing. Rewind: Skip backwards in a playing track. Stop: Stop playback. Power: Open/close the iTunes application. Quit: Exit the iTunes application. SoundVolumeUp: Turn the volume up 5%. SoundVolumeDown: Turn the volume down 5%. ToggleiTunes: Show/hide the iTunes window.
https://docs.rainmeter.net/manual-beta/plugins/itunes/
2017-09-19T19:02:52
CC-MAIN-2017-39
1505818685993.12
[]
docs.rainmeter.net
Category:Preferences Krita is highly customizable and makes many settings and options available to customize through the Preferences area. These settings are accessed by going to Settings > Configure Krita. On MacOS, the settings are under the top-left menu area, as you would expect of any program under MacOS. Krita's preferences are saved in the file kritarc. This file is located in C:\Users\*username*\AppData\Local\krita on Windows, ~/.config/krita on Linux, and ~/Library/Preferences/krita on OS X. If you would like to back up your custom settings or synchronize them from one computer to another, you can just copy this file. It even works across platforms! Custom shortcuts are saved in a separate file kritashortcutsrc which can also be backed up in the same way. This is discussed further in the shortcuts section. Pages in category ‘Preferences’ The following 11 pages are in this category, out of 11 total.
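Since backing up the settings is literally just copying kritarc (and kritashortcutsrc) out of the location listed above for your platform, the copy is easy to script. A minimal Python sketch, assuming the Linux location from the text and an arbitrary backup folder:

# Copy Krita's settings files to a backup folder.
import shutil
from pathlib import Path

CONFIG_DIR = Path.home() / ".config" / "krita"        # Linux location quoted above
BACKUP_DIR = Path.home() / "krita-settings-backup"    # assumption: any folder you like
BACKUP_DIR.mkdir(exist_ok=True)

for name in ("kritarc", "kritashortcutsrc"):
    src = CONFIG_DIR / name
    if src.exists():
        shutil.copy2(src, BACKUP_DIR / name)
        print(f"Backed up {src} -> {BACKUP_DIR / name}")
    else:
        print(f"{src} not found (nothing to back up)")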
https://docs.krita.org/Category:Preferences
2017-09-19T18:51:24
CC-MAIN-2017-39
1505818685993.12
[]
docs.krita.org
All content with label balancing+jboss. Related Labels: high, wildfly, realm, tutorial, service, 2012, eap, ssl, eap6, s, load, security, modcluster, cli, getting_started, availability, clustering, cluster, mod_jk, domain, tomcat, storeconfig, wildly, favourite, l, httpd, as, ha, high-availability, mod_cluster, as7, ews, http more » ( - balancing, - jboss )
https://docs.jboss.org/author/label/balancing+jboss
2018-05-20T17:07:56
CC-MAIN-2018-22
1526794863662.15
[]
docs.jboss.org
Get started with Field Service Management These topics describe how to activate Field Service Management and configure it for use. Activate Field Service Management: You can activate the Field Service Management plugin (com.snc.work_management) if you have the admin role. This plugin activates related plugins if they are not already active. Configure field service management: Field service management defaults to the request-driven method for handling work order tasks. Administrators in the global domain can set field service configurations to determine how the system handles daily operations.
https://docs.servicenow.com/bundle/geneva-service-management-for-the-enterprise/page/product/planning_and_policy/concept/c_GetStartedWithFieldService.html
2018-05-20T17:58:08
CC-MAIN-2018-22
1526794863662.15
[]
docs.servicenow.com
Level.GetMembers Method Returns a MemberCollection that contains a collection of members for the Level. Note: This method loads all members of the level. If there are a large number of members, this method may take a long time to execute. Syntax 'Declaration Public Function GetMembers As MemberCollection 'Usage Dim instance As Level Dim returnValue As MemberCollection returnValue = instance.GetMembers() public MemberCollection GetMembers() public: MemberCollection^ GetMembers() member GetMembers : unit -> MemberCollection public function GetMembers() : MemberCollection Return value Type: Microsoft.AnalysisServices.AdomdClient.MemberCollection A MemberCollection that contains a collection of members for the Level. Remarks The GetMembers method provides a collection of Member objects that represent the members that are associated with the level. Microsoft.AnalysisServices.AdomdClient namespace
https://docs.microsoft.com/ko-kr/previous-versions/sql/sql-server-2012/ms126788(v=sql.110)
2018-05-20T18:27:26
CC-MAIN-2018-22
1526794863662.15
[]
docs.microsoft.com
When you capture the output of a script step as a comment, you can use the script's output or exit code as the input to a subsequent step in the workflow. The following step types allow script output as input: •Set Custom Attribute step: Use script output to set a custom attribute value for a VM. The Set Custom Attribute step can be used in completion workflows and command workflows. •Any type of email step: Use script output to populate the Address List field. The following variables are available: #{steps[x].output} #{steps[x].exitCode} where x is a step number (beginning at 1) or a step name. For example, to use the output of the third step in a workflow, add this syntax to a subsequent step in the workflow: #{steps[3].output} In the following completion workflow example, the first step is an Execute Script step that backs up a VM and records the time of the backup. Step 1 is configured to capture the script output as comment. The second step is a Set Custom Attribute step that sets a value for the preconfigured custom attribute "Last Backup Time". We use the output from step 1 (the time of the last backup) as input for the custom attribute value. In the following approval workflow example, the first step is an Execute Script step that queries Active Directory for the email address of the requester's manager. Step 1 is configured to capture the script output as comment. The second step is a Send Approval Email step. We use the output from step 1 (the email address of the requester's manager) as input for the Address List field. To see how this functionality fits into an end-to-end example, see Walk-Through: Creating a Workflow.
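To make the placeholder syntax concrete, the toy Python sketch below resolves #{steps[x].output} and #{steps[x].exitCode} references against a dictionary of earlier step results. It is only an illustration of how the references map to step data, not the product's actual template engine, and the sample values are invented.

# Resolve #{steps[x].output} / #{steps[x].exitCode} references against step results.
import re

step_results = {
    1: {"output": "2018-05-20 17:42:24", "exitCode": "0"},   # e.g. the last backup time
}

PLACEHOLDER = re.compile(r"#\{steps\[(\w+)\]\.(output|exitCode)\}")


def resolve(text):
    def repl(match):
        step, attr = match.group(1), match.group(2)
        key = int(step) if step.isdigit() else step          # step number or step name
        return step_results[key][attr]
    return PLACEHOLDER.sub(repl, text)


print(resolve("Last Backup Time = #{steps[1].output}"))
print(resolve("Backup script exit code: #{steps[1].exitCode}"))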
http://docs.embotics.com/script_output_as_step_input.htm
2018-05-20T17:42:24
CC-MAIN-2018-22
1526794863662.15
[]
docs.embotics.com
Create and manage SQL Database elastic jobs using PowerShell (preview) The PowerShell APIs for Elastic Database jobs (in preview), let you define a group of databases against which scripts will execute. This article shows how to create and manage Elastic Database jobs using PowerShell cmdlets. See Elastic jobs overview. Prerequisites - An Azure subscription. For a free trial, see Free one-month trial. - A set of databases created with the Elastic Database tools. See Get started with Elastic Database tools. - Azure PowerShell. For detailed information, see How to install and configure Azure PowerShell. - Elastic Database jobs PowerShell package: See Installing Elastic Database jobs Select your Azure subscription To select the subscription you need your subscription Id (-SubscriptionId) or subscription name (-SubscriptionName). If you have multiple subscriptions you can run the Get-AzureRmSubscription cmdlet and copy the desired subscription information from the result set. Once you have your subscription information, run the following commandlet to set this subscription as the default, namely the target for creating and managing jobs: Select-AzureRmSubscription -SubscriptionId {SubscriptionID} The PowerShell ISE is recommended for usage to develop and execute PowerShell scripts against the Elastic Database jobs. Elastic Database jobs objects The following table lists out all the object types of Elastic Database jobs along with its description and relevant PowerShell APIs. Supported Elastic Database jobs group types The job executes Transact-SQL (T-SQL) scripts or application of DACPACs across a group of databases. When a job is submitted to be executed across a group of databases, the job “expands” the into child jobs where each performs the requested execution against a single database in the group. There are two types of groups that you can create: - Shard Map group: When a job is submitted to target a shard map, the job queries the shard map to determine its current set of shards, and then creates child jobs for each shard in the shard map. - Custom Collection group: A custom defined set of databases. When a job targets a custom collection, it creates child jobs for each database currently in the custom collection. To set the Elastic Database jobs connection A connection needs to be set to the jobs control database prior to using the jobs APIs. Running this cmdlet triggers a credential window to pop up requesting the user name and password created when installing Elastic Database jobs. All examples provided within this topic assume that this first step has already been performed. Open a connection to the Elastic Database jobs: Use-AzureSqlJobConnection -CurrentAzureSubscription Encrypted credentials within the Elastic Database jobs Database credentials can be inserted into the jobs control database with its password encrypted. It is necessary to store credentials to enable jobs to be executed at a later time, (using job schedules). Encryption works through a certificate created as part of the installation script. The installation script creates and uploads the certificate into the Azure Cloud Service for decryption of the stored encrypted passwords. The Azure Cloud Service later stores the public key within the jobs control database which enables the PowerShell API or Azure portal interface to encrypt a provided password without requiring the certificate to be locally installed. 
The credential passwords are encrypted and secure from users with read-only access to Elastic Database jobs objects. But it is possible for a malicious user with read-write access to Elastic Database Jobs objects to extract a password. Credentials are designed to be reused across job executions. Credentials are passed to target databases when establishing connections. There are currently no restrictions on the target databases used for each credential, malicious user could add a database target for a database under the malicious user's control. The user could subsequently start a job targeting this database to gain the credential's password. Security best practices for Elastic Database jobs include: - Limit usage of the APIs to trusted individuals. - Credentials should have the least privileges necessary to perform the job task. More information can be seen within this Authorization and Permissions SQL Server MSDN article. To create an encrypted credential for job execution across databases To create a new encrypted credential, the Get-Credential cmdlet prompts for a user name and password that can be passed to the New-AzureSqlJobCredential cmdlet. $credentialName = "{Credential Name}" $databaseCredential = Get-Credential $credential = New-AzureSqlJobCredential -Credential $databaseCredential -CredentialName $credentialName Write-Output $credential To update credentials When passwords change, use the Set-AzureSqlJobCredential cmdlet and set the CredentialName parameter. $credentialName = "{Credential Name}" Set-AzureSqlJobCredential -CredentialName $credentialName -Credential $credential To define an Elastic Database shard map target To execute a job against all databases in a shard set (created using Elastic Database client library), use a shard map as the database target. This example requires a sharded application created using the Elastic Database client library. See Getting started with Elastic Database tools sample. The shard map manager database must be set as a database target and then the specific shard map must be specified as a target. $shardMapCredentialName = "{Credential Name}" $shardMapDatabaseName = "{ShardMapDatabaseName}" #example: ElasticScaleStarterKit_ShardMapManagerDb $shardMapDatabaseServerName = "{ShardMapServerName}" $shardMapName = "{MyShardMap}" #example: CustomerIDShardMap $shardMapDatabaseTarget = New-AzureSqlJobTarget -DatabaseName $shardMapDatabaseName -ServerName $shardMapDatabaseServerName $shardMapTarget = New-AzureSqlJobTarget -ShardMapManagerCredentialName $shardMapCredentialName -ShardMapManagerDatabaseName $shardMapDatabaseName -ShardMapManagerServerName $shardMapDatabaseServerName -ShardMapName $shardMapName Write-Output $shardMapTarget Create a T-SQL Script for execution across databases When creating T-SQL scripts for execution, it is highly recommended to build them to be idempotent and resilient against failures. Elastic Database jobs will retry execution of a script whenever execution encounters a failure, regardless of the classification of the failure. Use the New-AzureSqlJobContent cmdlet to create and save a script for execution and set the -ContentName and -CommandText parameters. 
$scriptName = "Create a TestTable" $scriptCommandText = " IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = 'TestTable') BEGIN CREATE TABLE TestTable( TestTableId INT PRIMARY KEY IDENTITY, InsertionTime DATETIME2 ); END GO INSERT INTO TestTable(InsertionTime) VALUES (sysutcdatetime()); GO" $script = New-AzureSqlJobContent -ContentName $scriptName -CommandText $scriptCommandText Write-Output $script Create a new script from a file If the T-SQL script is defined within a file, use this to import the script: $scriptName = "My Script Imported from a File" $scriptPath = "{Path to SQL File}" $scriptCommandText = Get-Content -Path $scriptPath $script = New-AzureSqlJobContent -ContentName $scriptName -CommandText $scriptCommandText Write-Output $script To update a T-SQL script for execution across databases This PowerShell script updates the T-SQL command text for an existing script. Set the following variables to reflect the desired script definition to be set: $scriptName = "Create a TestTable" $scriptUpdateComment = "Adding AdditionalInformation column to TestTable" $scriptCommandText = " IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = 'TestTable') BEGIN CREATE TABLE TestTable( TestTableId INT PRIMARY KEY IDENTITY, InsertionTime DATETIME2 ); END GO IF NOT EXISTS (SELECT columns.name FROM sys.columns INNER JOIN sys.tables on columns.object_id = tables.object_id WHERE tables.name = 'TestTable' AND columns.name = 'AdditionalInformation') BEGIN ALTER TABLE TestTable ADD AdditionalInformation NVARCHAR(400); END GO INSERT INTO TestTable(InsertionTime, AdditionalInformation) VALUES (sysutcdatetime(), 'test'); GO" To update the definition to an existing script Set-AzureSqlJobContentDefinition -ContentName $scriptName -CommandText $scriptCommandText -Comment $scriptUpdateComment To create a job to execute a script across a shard map This PowerShell script starts a job for execution of a script across each shard in an Elastic Scale shard map. Set the following variables to reflect the desired script and target: $jobName = "{Job Name}" $scriptName = "{Script Name}" $shardMapServerName = "{Shard Map Server Name}" $shardMapDatabaseName = "{Shard Map Database Name}" $shardMapName = "{Shard Map Name}" $credentialName = "{Credential Name}" $shardMapTarget = Get-AzureSqlJobTarget -ShardMapManagerDatabaseName $shardMapDatabaseName -ShardMapManagerServerName $shardMapServerName -ShardMapName $shardMapName $job = New-AzureSqlJob -ContentName $scriptName -CredentialName $credentialName -JobName $jobName -TargetId $shardMapTarget.TargetId Write-Output $job To execute a job This PowerShell script executes an existing job: Update the following variable to reflect the desired job name to have executed: $jobName = "{Job Name}" $jobExecution = Start-AzureSqlJobExecution -JobName $jobName Write-Output $jobExecution To retrieve the state of a single job execution Use the Get-AzureSqlJobExecution cmdlet and set the JobExecutionId parameter to view the state of job execution. $jobExecutionId = "{Job Execution Id}" $jobExecution = Get-AzureSqlJobExecution -JobExecutionId $jobExecutionId Write-Output $jobExecution Use the same Get-AzureSqlJobExecution cmdlet with the IncludeChildren parameter to view the state of child job executions, namely the specific state for each job execution against each database targeted by the job. 
$jobExecutionId = "{Job Execution Id}" $jobExecutions = Get-AzureSqlJobExecution -JobExecutionId $jobExecutionId -IncludeChildren Write-Output $jobExecutions To view the state across multiple job executions The Get-AzureSqlJobExecution cmdlet has multiple optional parameters that can be used to display multiple job executions, filtered through the provided parameters. The following demonstrates some of the possible ways to use Get-AzureSqlJobExecution: Retrieve all active top level job executions: Get-AzureSqlJobExecution Retrieve all top level job executions, including inactive job executions: Get-AzureSqlJobExecution -IncludeInactive Retrieve all child job executions of a provided job execution ID, including inactive job executions: $parentJobExecutionId = "{Job Execution Id}" Get-AzureSqlJobExecution -AzureSqlJobExecution -JobExecutionId $parentJobExecutionId -IncludeInactive -IncludeChildren Retrieve all job executions created using a schedule / job combination, including inactive jobs: $jobName = "{Job Name}" $scheduleName = "{Schedule Name}" Get-AzureSqlJobExecution -JobName $jobName -ScheduleName $scheduleName -IncludeInactive Retrieve all jobs targeting a specified shard map, including inactive jobs: $shardMapServerName = "{Shard Map Server Name}" $shardMapDatabaseName = "{Shard Map Database Name}" $shardMapName = "{Shard Map Name}" $target = Get-AzureSqlJobTarget -ShardMapManagerDatabaseName $shardMapDatabaseName -ShardMapManagerServerName $shardMapServerName -ShardMapName $shardMapName Get-AzureSqlJobExecution -TargetId $target.TargetId -IncludeInactive Retrieve all jobs targeting a specified custom collection, including inactive jobs: $customCollectionName = "{Custom Collection Name}" $target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName Get-AzureSqlJobExecution -TargetId $target.TargetId -IncludeInactive Retrieve the list of job task executions within a specific job execution: $jobExecutionId = "{Job Execution Id}" $jobTaskExecutions = Get-AzureSqlJobTaskExecution -JobExecutionId $jobExecutionId Write-Output $jobTaskExecutions Retrieve job task execution details: The following PowerShell script can be used to view the details of a job task execution, which is particularly useful when debugging execution failures. $jobTaskExecutionId = "{Job Task Execution Id}" $jobTaskExecution = Get-AzureSqlJobTaskExecution -JobTaskExecutionId $jobTaskExecutionId Write-Output $jobTaskExecution To retrieve failures within job task executions The JobTaskExecution object includes a property for the lifecycle of the task along with a message property. If a job task execution failed, the lifecycle property will be set to Failed and the message property will be set to the resulting exception message and its stack. If a job did not succeed, it is important to view the details of job tasks that did not succeed for a given job. $jobExecutionId = "{Job Execution Id}" $jobTaskExecutions = Get-AzureSqlJobTaskExecution -JobExecutionId $jobExecutionId Foreach($jobTaskExecution in $jobTaskExecutions) { if($jobTaskExecution.Lifecycle -ne 'Succeeded') { Write-Output $jobTaskExecution } } To wait for a job execution to complete The following PowerShell script can be used to wait for a job task to complete: $jobExecutionId = "{Job Execution Id}" Wait-AzureSqlJobExecution -JobExecutionId $jobExecutionId Create a custom execution policy Elastic Database jobs supports creating custom execution policies that can be applied when starting jobs. 
Execution policies currently allow for defining: - Name: Identifier for the execution policy. - Job Timeout: Total time before a job will be canceled by Elastic Database Jobs. - Initial Retry Interval: Interval to wait before first retry. - Maximum Retry Interval: Cap of retry intervals to use. - Retry Interval Backoff Coefficient: Coefficient used to calculate the next interval between retries. The following formula is used: (Initial Retry Interval) * Math.pow((Interval Backoff Coefficient), (Number of Retries) - 2). - Maximum Attempts: The maximum number of retry attempts to perform within a job. The default execution policy uses the following values: - Name: Default execution policy - Job Timeout: 1 week - Initial Retry Interval: 100 milliseconds - Maximum Retry Interval: 30 minutes - Retry Interval Coefficient: 2 - Maximum Attempts: 2,147,483,647 Create the desired execution policy: $executionPolicyName = "{Execution Policy Name}" $initialRetryInterval = New-TimeSpan -Seconds 10 $jobTimeout = New-TimeSpan -Minutes 30 $maximumAttempts = 999999 $maximumRetryInterval = New-TimeSpan -Minutes 1 $retryIntervalBackoffCoefficient = 1.5 $executionPolicy = New $executionPolicy Update a custom execution policy Update the desired execution policy to update: $executionPolicyName = "{Execution Policy Name}" $initialRetryInterval = New-TimeSpan -Seconds 15 $jobTimeout = New-TimeSpan -Minutes 30 $maximumAttempts = 999999 $maximumRetryInterval = New-TimeSpan -Minutes 1 $retryIntervalBackoffCoefficient = 1.5 $updatedExecutionPolicy = Set $updatedExecutionPolicy Cancel a job Elastic Database Jobs supports cancellation requests of jobs. If Elastic Database Jobs detects a cancellation request for a job currently being executed, it will attempt to stop the job. There are two different ways that Elastic Database Jobs can perform a cancellation: - Cancel currently executing tasks: If a cancellation is detected while a task is currently running, a cancellation will be attempted within the currently executing aspect of the task. For example: If there is a long running query currently being performed when a cancellation is attempted, there will be an attempt to cancel the query. - Canceling task retries: If a cancellation is detected by the control thread before a task is launched for execution, the control thread will avoid launching the task and declare the request as canceled. If a job cancellation is requested for a parent job, the cancellation request will be honored for the parent job and for all of its child jobs. To submit a cancellation request, use the Stop-AzureSqlJobExecution cmdlet and set the JobExecutionId parameter. $jobExecutionId = "{Job Execution Id}" Stop-AzureSqlJobExecution -JobExecutionId $jobExecutionId To delete a job and job history asynchronously Elastic Database jobs supports asynchronous deletion of jobs. A job can be marked for deletion and the system will delete the job and all its job history after all job executions have completed for the job. The system will not automatically cancel active job executions. Invoke Stop-AzureSqlJobExecution to cancel active job executions. To trigger job deletion, use the Remove-AzureSqlJob cmdlet and set the JobName parameter. $jobName = "{Job Name}" Remove-AzureSqlJob -JobName $jobName To create a custom database target You can define custom database targets either for direct execution or for inclusion within a custom database group. 
For example, because elastic pools are not yet directly supported using PowerShell APIs, you can create a custom database target and custom database collection target which encompasses all the databases in the pool. Set the following variables to reflect the desired database information: $databaseName = "{Database Name}" $databaseServerName = "{Server Name}" New-AzureSqlJobTarget -DatabaseName $databaseName -ServerName $databaseServerName To create a custom database collection target Use the New-AzureSqlJobTarget cmdlet to define a custom database collection target to enable execution across multiple defined database targets. After creating a database group, databases can be associated with the custom collection target. Set the following variables to reflect the desired custom collection target configuration: $customCollectionName = "{Custom Database Collection Name}" New-AzureSqlJobTarget -CustomCollectionName $customCollectionName To add databases to a custom database collection target To add a database to a specific custom collection use the Add-AzureSqlJobChildTarget cmdlet. $databaseServerName = "{Database Server Name}" $databaseName = "{Database Name}" $customCollectionName = "{Custom Database Collection Name}" Add-AzureSqlJobChildTarget -CustomCollectionName $customCollectionName -DatabaseName $databaseName -ServerName $databaseServerName Review the databases within a custom database collection target Use the Get-AzureSqlJobTarget cmdlet to retrieve the child databases within a custom database collection target. $customCollectionName = "{Custom Database Collection Name}" $target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName $childTargets = Get-AzureSqlJobTarget -ParentTargetId $target.TargetId Write-Output $childTargets Create a job to execute a script across a custom database collection target Use the New-AzureSqlJob cmdlet to create a job against a group of databases defined by a custom database collection target. Elastic Database jobs will expand the job into multiple child jobs each corresponding to a database associated with the custom database collection target and ensure that the script is executed against each database. Again, it is important that scripts are idempotent to be resilient to retries. $jobName = "{Job Name}" $scriptName = "{Script Name}" $customCollectionName = "{Custom Collection Name}" $credentialName = "{Credential Name}" $target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName $job = New-AzureSqlJob -JobName $jobName -CredentialName $credentialName -ContentName $scriptName -TargetId $target.TargetId Write-Output $job Data collection across databases You can use a job to execute a query across a group of databases and send the results to a specific table. The table can be queried after the fact to see the query’s results from each database. This provides an asynchronous method to execute a query across many databases. Failed attempts are handled automatically via retries. The specified destination table will be automatically created if it does not yet exist. The new table matches the schema of the returned result set. If a script returns multiple result sets, Elastic Database jobs will only send the first to the destination table. The following PowerShell script executes a script and collects its results into a specified table. This script assumes that a T-SQL script has been created which outputs a single result set and that a custom database collection target has been created. 
This script uses the Get-AzureSqlJobTarget cmdlet. Set the parameters for script, credentials, and execution target: $jobName = "{Job Name}" $scriptName = "{Script Name}" $executionCredentialName = "{Execution Credential Name}" $customCollectionName = "{Custom Collection Name}" $destinationCredentialName = "{Destination Credential Name}" $destinationServerName = "{Destination Server Name}" $destinationDatabaseName = "{Destination Database Name}" $destinationSchemaName = "{Destination Schema Name}" $destinationTableName = "{Destination Table Name}" $target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName To create and start a job for data collection scenarios This script uses the Start-AzureSqlJobExecution cmdlet. $job = New-AzureSqlJob -JobName $jobName -CredentialName $executionCredentialName -ContentName $scriptName -ResultSetDestinationServerName $destinationServerName -ResultSetDestinationDatabaseName $destinationDatabaseName -ResultSetDestinationSchemaName $destinationSchemaName -ResultSetDestinationTableName $destinationTableName -ResultSetDestinationCredentialName $destinationCredentialName -TargetId $target.TargetId Write-Output $job $jobExecution = Start-AzureSqlJobExecution -JobName $jobName Write-Output $jobExecution To schedule a job execution trigger The following PowerShell script can be used to create a recurring schedule. This script uses a minute interval, but New-AzureSqlJobSchedule also supports -DayInterval, -HourInterval, -MonthInterval, and -WeekInterval parameters. Schedules that execute only once can be created by passing -OneTime. Create a new schedule: $scheduleName = "Every one minute" $minuteInterval = 1 $startTime = (Get-Date).ToUniversalTime() $schedule = New-AzureSqlJobSchedule -MinuteInterval $minuteInterval -ScheduleName $scheduleName -StartTime $startTime Write-Output $schedule To trigger a job executed on a time schedule A job trigger can be defined to have a job executed according to a time schedule. The following PowerShell script can be used to create a job trigger. Use New-AzureSqlJobTrigger and set the following variables to correspond to the desired job and schedule: $jobName = "{Job Name}" $scheduleName = "{Schedule Name}" $jobTrigger = New-AzureSqlJobTrigger -ScheduleName $scheduleName -JobName $jobName Write-Output $jobTrigger To remove a scheduled association to stop job from executing on schedule To discontinue reoccurring job execution through a job trigger, the job trigger can be removed. Remove a job trigger to stop a job from being executed according to a schedule using the Remove-AzureSqlJobTrigger cmdlet. $jobName = "{Job Name}" $scheduleName = "{Schedule Name}" Remove-AzureSqlJobTrigger -ScheduleName $scheduleName -JobName $jobName Retrieve job triggers bound to a time schedule The following PowerShell script can be used to obtain and display the job triggers registered to a particular time schedule. $scheduleName = "{Schedule Name}" $jobTriggers = Get-AzureSqlJobTrigger -ScheduleName $scheduleName Write-Output $jobTriggers To retrieve job triggers bound to a job Use Get-AzureSqlJobTrigger to obtain and display schedules containing a registered job. $jobName = "{Job Name}" $jobTriggers = Get-AzureSqlJobTrigger -JobName $jobName Write-Output $jobTriggers To create a data-tier application (DACPAC) for execution across databases To create a DACPAC, see Data-Tier applications. To deploy a DACPAC, use the New-AzureSqlJobContent cmdlet. The DACPAC must be accessible to the service. 
It is recommended to upload a created DACPAC to Azure Storage and create a Shared Access Signature for the DACPAC. $dacpacUri = "{Uri}" $dacpacName = "{Dacpac Name}" $dacpac = New-AzureSqlJobContent -DacpacUri $dacpacUri -ContentName $dacpacName Write-Output $dacpac To update a data-tier application (DACPAC) for execution across databases Existing DACPACs registered within Elastic Database Jobs can be updated to point to new URIs. Use the Set-AzureSqlJobContentDefinition cmdlet to update the DACPAC URI on an existing registered DACPAC: $dacpacName = "{Dacpac Name}" $newDacpacUri = "{Uri}" $updatedDacpac = Set-AzureSqlJobDacpacDefinition -ContentName $dacpacName -DacpacUri $newDacpacUri Write-Output $updatedDacpac To create a job to apply a data-tier application (DACPAC) across databases After a DACPAC has been created within Elastic Database Jobs, a job can be created to apply the DACPAC across a group of databases. The following PowerShell script can be used to create a DACPAC job across a custom collection of databases: $jobName = "{Job Name}" $dacpacName = "{Dacpac Name}" $customCollectionName = "{Custom Collection Name}" $credentialName = "{Credential Name}" $target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName $job = New-AzureSqlJob -JobName $jobName -CredentialName $credentialName -ContentName $dacpacName -TargetId $target.TargetId Write-Output $job Additional resources Not using elastic database tools yet? Check out our Getting Started Guide. For questions, please reach out to us on the SQL Database forum and for feature requests, please add them to the SQL Database feedback forum.
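As a small footnote to the execution-policy section above, the retry-interval formula it quotes is easy to work through by hand. The Python sketch below evaluates it for the custom policy example (10-second initial interval, 1-minute maximum, coefficient 1.5); applying the maximum retry interval with min() is an assumption about how the cap is enforced.

# Interval = (Initial Retry Interval) * (Backoff Coefficient) ** (retries - 2),
# capped at the Maximum Retry Interval, per the formula quoted above.
def retry_interval_seconds(retries, initial=10.0, coefficient=1.5, maximum=60.0):
    return min(initial * coefficient ** (retries - 2), maximum)


for n in range(2, 9):
    print(f"retry {n}: wait {retry_interval_seconds(n):.1f} s")
# retry 2: 10.0 s, retry 3: 15.0 s, retry 4: 22.5 s, ... retry 7 onward is capped at 60.0 s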
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-jobs-powershell
2018-05-20T18:04:34
CC-MAIN-2018-22
1526794863662.15
[]
docs.microsoft.com
Query session Applies To: Windows Vista, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 Displays information about sessions on a Remote Desktop Session Host (RD Session Host) server. The list includes information not only about active sessions but also about other sessions that the server runs. Syntax query session [<SessionName> | <UserName> | <SessionID>] [/server:<ServerName>] [/mode] [/flow] [/connect] [/counter] Parameters Remarks: C:\>query session SESSIONNAME USERNAME ID STATE TYPE DEVICE >console Administrator1 0 active wdcon rdp-tcp#1 User1 1 active wdtshare rdp-tcp 2 listen wdtshare 4 idle 5 idle The greater than (>) symbol indicates the current session. Examples To display information about all active sessions on server SERVER2, type: query session /server:SERVER2 To display information about active session MODEM02, type: query session MODEM02 Additional references Remote Desktop Services (Terminal Services) Command Reference
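If you need the session list from a script rather than interactively, the command can be invoked and its output inspected programmatically. The Python sketch below is a rough illustration based on the sample output above; it treats the ">" prefix as the current-session marker and splits columns on whitespace, which is only a heuristic for the column-aligned output, and it is Windows-only like the command itself.

# Run "query session" and pick out the current and active sessions.
import subprocess
from typing import List, Optional


def query_sessions(server: Optional[str] = None) -> List[str]:
    cmd = ["query", "session"]
    if server:
        cmd.append(f"/server:{server}")
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.splitlines()


if __name__ == "__main__":
    for line in query_sessions()[1:]:          # skip the header row
        stripped = line.strip()
        if stripped.startswith(">"):
            print("current session:", stripped.lstrip(">").split()[0])
        elif " active " in f" {stripped} ":
            print("active session :", stripped.split()[0])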
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc785434(v=ws.11)
2018-05-20T18:09:18
CC-MAIN-2018-22
1526794863662.15
[]
docs.microsoft.com
Excluding pages from the cache ↑ Back to top If using caching plugins (such as WP Super Cache or W3 Total Cache), make sure you exclude the following pages from the cache through their respective settings panels: - Cart - My Account These pages need to stay dynamic since they display information specific to the current customer. W3 Total Cache Minify Settings ↑ Back to top Ensure you add 'mfunc' to the 'Ignored comment stems' option in the Minify settings. WP-Rocket ↑ Back to top WooCommerce 2.1+ is fully compatible with WP-Rocket. No extra configuration is needed. All WooCommerce pages are automatically detected and not cached. Varnish ↑ Back to top if (req.url ~ "^/(cart|my-account|checkout|addons)") { return (pass); } if ( req.url ~ "\?add-to-cart=" ) { return (pass); } Why is my Varnish configuration not working in WooCommerce? ↑ Back to top Check out the following WordPress.org Support forum post on how cookies may be affecting your Varnish configuration. Why is my Password Reset stuck in a loop? ↑ Back to top This is due to the My Account page being cached. Some hosts with server-side caching don't prevent my-account.php from being cached. If you're unable to reset your password and keep being returned to the login screen, please speak to your host to make sure this page is being excluded from their caching.
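You can sanity-check which request paths the Varnish rules above would exclude from the cache by testing them against the same regular expressions, for example with Python's re module (the URL list here is just illustrative):

# The two match rules from the Varnish snippet above, re-expressed in Python.
import re

BYPASS_PATTERNS = [
    re.compile(r"^/(cart|my-account|checkout|addons)"),
    re.compile(r"\?add-to-cart="),
]

urls = [
    "/cart/",
    "/my-account/lost-password/",
    "/shop/?add-to-cart=42",
    "/shop/t-shirt/",          # should remain cacheable
]

for url in urls:
    bypass = any(p.search(url) for p in BYPASS_PATTERNS)
    print(f"{url:35} -> {'pass (not cached)' if bypass else 'cacheable'}")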
https://docs.woocommerce.com/document/configuring-caching-plugins/
2018-05-20T17:30:18
CC-MAIN-2018-22
1526794863662.15
[]
docs.woocommerce.com
Deb-o-Matic What is Deb-o-Matic? Deb-o-Matic is an easy to use utility to build Debian source packages, meant to help developers automate the building of their packages with a tool that requires limited user interaction and a simple configuration. It provides some useful features such as automatic chroot creation, rebuild of source packages, post-build checks, and much more. It is also extensible with modules that are loaded and executed during the build phases. Why Deb-o-Matic? When the author started to contribute to Debian and Ubuntu development, he was running a 10-year-old PC and had poor network connectivity. Downloading lots of packages had always been a nightmare, Canonical's PPAs were always busy compiling other packages because of the limited resources invested at the time, and wanna-build was (and still is) too complex to set up for relatively simple workflows. A brand new piece of software was created to help build source packages and avoid the burden of compilation, without wasting too much time configuring complex software. Deb-o-Matic was born! A group of Debian and Ubuntu developers started to use it as their primary build machine to avoid playing with sbuild and long builds. Some of them still use Deb-o-Matic to build their packages. Over time, Deb-o-Matic has been used by some FLOSS projects too. For example, Scilab Enterprises uses Deb-o-Matic to build Scilab in a transparent and automatic way. Every 5 minutes, a cronjob checks whether any new commits have happened and starts a build through Deb-o-Matic.
http://deb-o-matic.readthedocs.io/en/stable/introduction.html
2018-05-20T17:18:56
CC-MAIN-2018-22
1526794863662.15
[]
deb-o-matic.readthedocs.io