Columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars)
Security Guidelines for Your IaaS Provider
Pivotal Cloud Foundry supports a variety of Infrastructure as a Service (IaaS) providers. Different IaaS providers require different configuration steps to secure user data, identity information, and credentials. Security requirements can vary broadly based on the unique configuration and infrastructure of each organization. Rather than provide specific guidance that may not apply to all use cases, Pivotal has collected links to IaaS providers' security and identity management documentation. The documents below may help you understand how your IaaS provider's security requirements affect your PCF deployment. Pivotal does not endorse these documents for accuracy or guarantee that their contents apply to all PCF installations.
How to Use This Topic
Find your IaaS provider in the list below. The documentation linked for each IaaS provider may help you configure and secure your installation infrastructure.
Amazon Web Services (AWS)
- AWS Identity and Access Management guide: This guide is a reference for AWS Identity and Access Management (IAM) features. If you are new to AWS, start here.
- AWS identity documentation
- AWS credential documentation: These documents give general definitions of IAM terms and provide best practices to help you manage IaaS users and permissions.
Google Cloud Platform (GCP)
- GCP authentication documentation: This developer-facing documentation explains general authentication guidelines for GCP.
Microsoft Azure
- Azure security documentation: This site has documentation on Azure security tools. It provides a general guide to managing IaaS users and credentials.
OpenStack
- OpenStack credential configuration
- OpenStack credential creation
- OpenStack deployment configuration: These documents provide a general reference for OpenStack service credential management.
VMware vSphere
- vSphere Security guide (PDF): This guide contains best practices for securing and managing a vSphere installation.
https://docs.pivotal.io/pivotalcf/1-11/security/security-adjacent/security-guidelines-iaas.html
2017-08-16T15:01:26
CC-MAIN-2017-34
1502886102307.32
[]
docs.pivotal.io
You use the Feather Account activation widget when you want users to first confirm their account and only then activate it. To do this, you add the Account activation widget on the confirmation page. The confirmation page can be the same page where the Registration widget is added or another page. PREREQUISITES: When configuring the Registration widget, on the Account activation tab, you must select the By confirmation link sent to user email radio button. NOTE: You can optionally customize the default widget template or create a new template for the Account activation widget. For more information, see Feather: Widget templates. To define additional advanced properties of the Account activation widget, in the widget designer, click Advanced and define the TemplateName. The name reflects your selection in the Template dropdown menu in the Simple view of the widget. To define additional model properties of the Account activation widget, in the widget designer, navigate to Advanced » Model and define the following: CssClass: If you specified a CSS class in the Simple view of the widget, the value is copied in the Model as well. MembershipProvider: In the MembershipProvider field, enter the name of the membership provider used to authenticate users in Sitefinity. ProfilePageId: If a specific profile page is selected, its ID is populated in this field.
https://docs.sitefinity.com/feather-account-activation-widget
2017-08-16T15:03:53
CC-MAIN-2017-34
1502886102307.32
[]
docs.sitefinity.com
The MathJax Community. Mailing Lists: If you need help using MathJax or you have solutions you want to share, please post to the MathJax users mailing list. If you want to follow just our press releases, please subscribe to our press list. Issue tracking: If you run into a problem, upgrade your copy first to verify that the problem persists in the latest version before reporting it on the issue tracker. Documentation: The source for this documentation can be found on GitHub. You can file bug reports on the documentation's bug tracker and actively contribute to the public documentation wiki. If you are using MathJax and want to show your support, please consider using our "Powered by MathJax" badge.
http://docs.mathjax.org/en/latest/community.html
2017-08-16T15:14:22
CC-MAIN-2017-34
1502886102307.32
[]
docs.mathjax.org
Provider name Environment Drives Env: Short description Provides access to the Windows environment variables. Detailed description The Windows PowerShell Environment provider lets you get, add, change, clear, and delete Windows environment variables in Windows PowerShell. The Environment provider is a flat namespace that contains only objects that represent the environment variables. The variables have no child items. Each environment variable is an instance of the System.Collections.DictionaryEntry class. The name of the variable is the dictionary key. The value of the environment variable is the dictionary value. The Environment provider exposes its data store in the Env: drive. To work with environment variables, change your location to the Env: drive ( set-location Env:), or work from another Windows PowerShell drive. To reference an environment variable from another location, use the Env: drive name in the path. The Environment provider supports all the cmdlets that contain the Item noun except for Invoke-Item. And, it supports the Get-Content and Set-Content cmdlets. However, it does not support the cmdlets that contain the ItemProperty noun, and it does not support the -Filter parameter in any cmdlet. Environment variables must conform to the usual naming standards. Additionally, the name cannot include the equal sign ( =). Changes to the environment variables affect the current session only. To save the changes, add the changes to the Windows PowerShell profile, or use Export-Console to save the current session. Capabilities ShouldProcess Examples Getting to the Env: drive Example 1 This command changes the current location to the Env: drive: Set-Location Env: You can use this command from any drive in Windows PowerShell. To return to a file system drive, type the drive name. For example, type: Set-Location c: Getting environment variables Example 1 This command lists all the environment variables in the current session: Get-ChildItem -Path Env: You can use this command from any Windows PowerShell drive. Example 2 This command gets the WINDIR environment Variable: Get-ChildItem -Path Env:windir Example 3 This command gets a list of all the environment variables in the current session and then sorts them by name: Get-ChildItem | Sort-Object -Property name By default, the environment variables appear in the order that Windows PowerShell discovers them. This command is submitted in the Env: drive. When you run this command from another drive, add the -Path parameter with a value of Env:. Creating a new environment variable Example 1 This command creates the USERMODE environment variable with a value of "Non-Admin": New-Item -Path . -Name USERMODE -Value Non-Admin Because the current location is in the Env: drive, the value of the -Path parameter is a dot ( .). The dot represents the current location. If you are not in the Env: drive, the value of the -Path parameter would be Env:. Displaying the properties and methods of environment variables Example 1 This command uses the Get-ChildItem cmdlet to get all the environment variables: Get-ChildItem -Path Env: | Get-Member The pipeline operator ( |) sends the results to Get-Member, which displays the methods and properties of the object. When you pipe a collection of objects to Get-Member, such as the collection of environment variables in the Env: drive, Get-Member evaluates each object in the collection separately. Get-Member then returns information about each object type that it finds. 
If all the objects are of the same type, it returns information about the single object type. In this case, all the environment variables are DictionaryEntry objects. To get information about the collection of DictionaryEntry objects, use the -InputObject parameter of Get-Member. For example, type: Get-Member -InputObject (Get-ChildItem Env:) When you use the -InputObject parameter, Get-Member evaluates the collection, not the objects in the collection. Example 2 This command lists the values of the properties of the WINDIR environment Variable: Get-Item Env:windir | Format-List -Property * It uses the Get-Item cmdlet to get an object that represents the WINDIR environment variable. The pipeline operator ( |) sends the results to the Format-List command. It uses the -Property parameter with a wildcard character ( *) to format and display the values of all the properties of the WINDIR environment variable. Changing the properties of an environment variable Example 1 This command uses the Rename-Item cmdlet to change the name of the USERMODE environment variable that you created to USERROLE: Rename-Item -Path Env:USERMODE -NewName USERROLE This change affects the Name, Key, and PSPath properties of the DictionaryEntry object. Do not change the name of an environment variable that the system uses. Although these changes affect only the current session, they might cause the system or a program to operate incorrectly. Example 2 This command uses the Set-Item cmdlet to change the value of the USERROLE environment variable to "Administrator": Set-Item -Path Env:USERROLE -Value Administrator Copying an environment variable Example 1 This command copies the value of the USERROLE environment variable to the USERROLE2 environment Variable: Copy-Item -Path Env:USERROLE -Destination Env:USERROLE2 Deleting an environment variable Example 1 This command deletes the USERROLE2 environment variable from the current session: Remove-Item -Path Env:USERROLE2 You can use this command in any Windows PowerShell drive. If you are in the Env: drive, you can omit the drive name from the path. Example 2 This command deletes the USERROLE environment variable. Clear-Item -Path Env:USERROLE
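The "current session only" scope described above is a property of process environments in general, not something specific to the Env: drive. As a cross-language illustration only (not part of the original article), the following Python sketch shows the same behavior: a variable set through os.environ is visible to the current process and its children, but disappears when the process exits unless it is persisted elsewhere (for example, in a profile script). The USERMODE name simply mirrors the example above.

import os
import subprocess
import sys

# Setting a variable through os.environ affects only this process
# and any child processes it starts -- the same "current session only"
# behavior described for the Env: drive.
os.environ["USERMODE"] = "Non-Admin"

# A child process inherits the change...
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('USERMODE'))"],
    capture_output=True, text=True,
)
print("child sees:", child.stdout.strip())   # prints: Non-Admin

# ...but the shell that launched this script does not, and once the
# script exits the variable is gone unless it was persisted elsewhere.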
https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/environment-provider?view=powershell-5.1
2017-08-16T15:32:27
CC-MAIN-2017-34
1502886102307.32
[]
docs.microsoft.com
Phoenix CloudCache configuration and log files Phoenix Editions: Business, Enterprise, Elite. Phoenix CloudCache log files The Phoenix CloudCache log files contain details of the backups to the Phoenix cloud. They also contain details of the local restores from Phoenix CloudCache to Phoenix agents. If Phoenix CloudCache does not function as expected, you can share these log files with the Druva Support team. You can find the log files at C:\ProgramData\PhoenixCloudCache. Phoenix CloudCache configuration file The Phoenix CloudCache configuration file (PhoenixCloudCache.cfg) contains configuration details that Phoenix CloudCache requires to communicate with Phoenix Cloud. Note: We recommend that you do not modify the configuration files except when required. In scenarios where you must modify these configuration files, contact the Druva Support team for assistance. You can find the Phoenix CloudCache configuration file at C:\ProgramData\PhoenixCloudCache. Phoenix CloudCache service The Druva Phoenix Cache Server service ensures that Phoenix CloudCache is running. Note: If Phoenix CloudCache stops working, the Druva Phoenix Cache Server service restarts Phoenix CloudCache.
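When sharing the log files with the Druva Support team, it can be convenient to bundle the whole directory into a single archive. The short Python sketch below is illustrative only; the directory path comes from this article, while the archive name and destination are assumptions.

import shutil
from pathlib import Path

# Directory documented above for Phoenix CloudCache logs and configuration.
LOG_DIR = Path(r"C:\ProgramData\PhoenixCloudCache")

# Bundle the directory into PhoenixCloudCache-logs.zip in the current
# working directory so it can be attached to a support ticket.
archive = shutil.make_archive("PhoenixCloudCache-logs", "zip", root_dir=str(LOG_DIR))
print(f"Created {archive}")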
https://docs.druva.com/Phoenix/030_Configure_Phoenix_For_Backup/100_Phoenix_CloudCache/Manage_Phoenix_CloudCache/030_Phoenix_CloudCache_configuration_and_log_files
2017-08-16T15:12:45
CC-MAIN-2017-34
1502886102307.32
[array(['https://docs.druva.com/@api/deki/files/3643/cross.png?revision=2', 'File:/cross.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/cross.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) ]
docs.druva.com
Many modern web-based solutions make the use of web services, hosted by web servers, to provide functionality for remote client applications. The operations that a web service exposes constitute a web API. A well-designed web API should aim to support: - Platform independence. Client applications should be able to utilize the API that the web service provides without requiring how the data or operations that API exposes are physically implemented. This requires that the API abides by common standards that enable a client application and web service to agree on which data formats to use, and the structure of the data that is exchanged between client applications and the web service. - Service evolution. The web service should be able to evolve and add (or remove) functionality independently from client applications. Existing client applications should be able to continue to operate unmodified as the features provided by the web service change. All functionality should also be discoverable, so that client applications can fully utilize it. The purpose of this guidance is to describe the issues that you should consider when designing a web API. Introduction to Representational State Transfer (REST) In his dissertation in 2000, Roy Fielding proposed an alternative architectural approach to structuring the operations exposed by web services; REST. REST is an architectural style for building distributed systems based on hypermedia. A primary advantage of the REST model is that it is based on open standards and does not bind the implementation of the model or the client applications that access it to any specific implementation. For example, a REST web service could be implemented by using the Microsoft ASP.NET Web API, and client applications could be developed by using any language and toolset that can generate HTTP requests and parse HTTP responses. Note REST is actually independent of any underlying protocol and is not necessarily tied to HTTP. However, most common implementations of systems that are based on REST utilize HTTP as the application protocol for sending and receiving requests. This document focuses on mapping REST principles to systems designed to operate using HTTP. The REST model uses a navigational scheme to represent objects and services over a network (referred to as resources). Many systems that implement REST typically use the HTTP protocol to transmit requests to access these resources. In these systems, a client application submits a request in the form of a URI that identifies a resource, and an HTTP method (the most common being GET, POST, PUT, or DELETE) that indicates the operation to be performed on that resource. The body of the HTTP request contains the data required to perform the operation. The important point to understand is that REST defines a stateless request model. HTTP requests should be independent and may occur in any order, so attempting to retain transient state information between requests is not feasible. The only place where information is stored is in the resources themselves, and each request should be an atomic operation. Effectively, a REST model implements a finite state machine where a request transitions a resource from one well-defined non-transient state to another. Note The stateless nature of individual requests in the REST model enables a system constructed by following these principles to be highly scalable. 
There is no need to retain any affinity between a client application making a series of requests and the specific web servers handling those requests. Another crucial point in implementing an effective REST model is to understand the relationships between the various resources to which the model provides access. These resources are typically organized as collections and relationships. For example, suppose that a quick analysis of an ecommerce system shows that there are two collections in which client applications are likely to be interested: orders and customers. Each order and customer should have its own unique key for identification purposes. The URI to access the collection of orders could be something as simple as /orders, and similarly the URI for retrieving all customers could be /customers. Issuing an HTTP GET request to the /orders URI should return a list representing all orders in the collection encoded as an HTTP response: GET HTTP/1.1 ... The response shown below encodes the orders as a JSON list structure: HTTP/1.1 200 OK ... Date: Fri, 22 Aug 2014 08:49:02 GMT Content-Length: ... [{"orderId":1,"orderValue":99.90,"productId":1,"quantity":1},{"orderId":2,"orderValue":10.00,"productId":4,"quantity":2},{"orderId":3,"orderValue":16.60,"productId":2,"quantity":4},{"orderId":4,"orderValue":25.90,"productId":3,"quantity":1},{"orderId":5,"orderValue":99.90,"productId":1,"quantity":1}] To fetch an individual order requires specifying the identifier for the order from the orders resource, such as /orders/2: GET HTTP/1.1 ... HTTP/1.1 200 OK ... Date: Fri, 22 Aug 2014 08:49:02 GMT Content-Length: ... {"orderId":2,"orderValue":10.00,"productId":4,"quantity":2} Note For simplicity, these examples show the information in responses being returned as JSON text data. However, there is no reason why resources should not contain any other type of data supported by HTTP, such as binary or encrypted information; the content-type in the HTTP response should specify the type. Also, a REST model may be able to return the same data in different formats, such as XML or JSON. In this case, the web service should be able to perform content negotiation with the client making the request. The request can include an Accept header which specifies the preferred format that the client would like to receive and the web service should attempt to honor this format if at all possible. Notice that the response from a REST request makes use of the standard HTTP status codes. For example, a request that returns valid data should include the HTTP response code 200 (OK), while a request that fails to find or delete a specified resource should return a response that includes the HTTP status code 404 (Not Found). Design and structure of a RESTful web API The keys to designing a successful web API are simplicity and consistency. A Web API that exhibits these two factors makes it easier to build client applications that need to consume the API. A RESTful web API is focused on exposing a set of connected resources, and providing the core operations that enable an application to manipulate these resources and easily navigate between them. For this reason, the URIs that constitute a typical RESTful web API should be oriented towards the data that it exposes, and use the facilities provided by HTTP to operate on this data. This approach requires a different mindset from that typically employed when designing a set of classes in an object-oriented API which tends to be more motivated by the behavior of objects and classes. 
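To make the request/response exchange above concrete, here is a minimal Python sketch using the requests library. The base address api.example.com is an assumption for illustration; the /orders URIs, the Accept header, and the field names follow the examples in this article.

import requests

BASE = "https://api.example.com"   # hypothetical base address

# Retrieve the orders collection; the Accept header asks for JSON.
resp = requests.get(f"{BASE}/orders", headers={"Accept": "application/json"})
if resp.status_code == 200:
    for order in resp.json():
        print(order["orderId"], order["orderValue"])

# Retrieve a single order by its key; a missing order yields 404 (Not Found).
resp = requests.get(f"{BASE}/orders/2", headers={"Accept": "application/json"})
if resp.status_code == 404:
    print("order 2 not found")
else:
    print(resp.json())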
Additionally, a RESTful web API should be stateless and not depend on operations being invoked in a particular sequence. The following sections summarize the points you should consider when designing a RESTful web API. Organizing the web API around resources Tip The URIs exposed by a REST web service should be based on nouns (the data to which the web API provides access) and not verbs (what an application can do with the data). Focus on the business entities that the web API exposes. For example, in a web API designed to support the ecommerce system described earlier, the primary entities are customers and orders. Processes such as the act of placing an order can be achieved by providing an HTTP POST operation that takes the order information and adds it to the list of orders for the customer. Internally, this POST operation can perform tasks such as checking stock levels, and billing the customer. The HTTP response can indicate whether the order was placed successfully or not. Also note that a resource does not have to be based on a single physical data item. As an example, an order resource might be implemented internally by using information aggregated from many rows spread across several tables in a relational database but presented to the client as a single entity. Tip Avoid designing a REST interface that mirrors or depends on the internal structure of the data that it exposes. REST is about more than implementing simple CRUD (Create, Retrieve, Update, Delete) operations over separate tables in a relational database. The purpose of REST is to map business entities and the operations that an application can perform on these entities to the physical implementation of these entities, but a client should not be exposed to these physical details. Individual business entities rarely exist in isolation (although some singleton objects may exist), but instead tend to be grouped together into collections. In REST terms, each entity and each collection are resources. In a RESTful web API, each collection has its own URI within the web service, and performing an HTTP GET request over a URI for a collection retrieves a list of items in that collection. Each individual item also has its own URI, and an application can submit another HTTP GET request using that URI to retrieve the details of that item. You should organize the URIs for collections and items in a hierarchical manner. In the ecommerce system, the URI /customers denotes the customer’s collection, and /customers/5 retrieves the details for the single customer with the ID 5 from this collection. This approach helps to keep the web API intuitive. Tip Adopt a consistent naming convention in URIs; in general it helps to use plural nouns for URIs that reference collections. You also need to consider the relationships between different types of resources and how you might expose these associations. For example, customers may place zero or more orders. A natural way to represent this relationship would be through a URI such as /customers/5/orders to find all the orders for customer 5. You might also consider representing the association from an order back to a specific customer through a URI such as /orders/99/customer to find the customer for order 99, but extending this model too far can become cumbersome to implement. A better solution is to provide navigable links to associated resources, such as the customer, in the body of the HTTP response message returned when the order is queried. 
This mechanism is described in more detail in the section Using the HATEOAS Approach to Enable Navigation To Related Resources later in this guidance. In more complex systems there may be many more types of entity, and it can be tempting to provide URIs that enable a client application to navigate through several levels of relationships, such as /customers/1/orders/99/products to obtain the list of products in order 99 placed by customer 1. However, this level of complexity can be difficult to maintain and is inflexible if the relationships between resources change in the future. Rather, you should seek to keep URIs relatively simple. Bear in mind that once an application has a reference to a resource, it should be possible to use this reference to find items related to that resource. The preceding query can be replaced with the URI /customers/1/orders to find all the orders for customer 1, and then query the URI /orders/99/products to find the products in this order (assuming order 99 was placed by customer 1). Tip Avoid requiring resource URIs more complex than collection/item/collection. Another point to consider is that all web requests impose a load on the web server, and the greater the number of requests the bigger the load. You should attempt to define your resources to avoid “chatty” web APIs that expose a large number of small resources. Such an API may require a client application to submit multiple requests to find all the data that it requires. It may be beneficial to denormalize data and combine related information together into bigger resources that can be retrieved by issuing a single request. However, you need to balance this approach against the overhead of fetching data that might not be frequently required by the client. Retrieving large objects can increase the latency of a request and incur additional bandwidth costs for little advantage if the additional data is not often used. Avoid introducing dependencies between the web API to the structure, type, or location of the underlying data sources. For example, if your data is located in a relational database, the web API does not need to expose each table as a collection of resources. Think of the web API as an abstraction of the database, and if necessary introduce a mapping layer between the database and the web API. In this way, if the design or implementation of the database changes (for example, you move from a relational database containing a collection of normalized tables to a denormalized NoSQL storage system such as a document database) client applications are insulated from these changes. Tip The source of the data that underpins a web API does not have to be a data store; it could be another service or line-of-business application or even a legacy application running on-premises within an organization. Finally, it might not be possible to map every operation implemented by a web API to a specific resource. You can handle such non-resource scenarios through HTTP GET requests that invoke a piece of functionality and return the results as an HTTP response message. A web API that implements simple calculator-style operations such as add and subtract could provide URIs that expose these operations as pseudo resources and utilize the query string to specify the parameters required. 
For example, a GET request to the URI /add?operand1=99&operand2=1 could return a response message with the body containing the value 100, and a GET request to the URI /subtract?operand1=50&operand2=20 could return a response message with the body containing the value 30. However, only use these forms of URIs sparingly. Defining operations in terms of HTTP methods The HTTP protocol defines a number of methods that assign semantic meaning to a request. The common HTTP methods used by most RESTful web APIs are: - GET, to retrieve a copy of the resource at the specified URI. The body of the response message contains the details of the requested resource. - POST, to create a new resource at the specified URI. The body of the request message provides the details of the new resource. Note that POST can also be used to trigger operations that don't actually create resources. - PUT, to replace or update the resource at the specified URI. The body of the request message specifies the resource to be modified and the values to be applied. - DELETE, to remove the resource at the specified URI. Note The HTTP protocol also defines other less commonly used methods, such as PATCH, which is used to request selective updates to a resource; HEAD, which is used to request a description of a resource; OPTIONS, which enables a client to obtain information about the communication options supported by the server; and TRACE, which allows a client to request information that it can use for testing and diagnostic purposes. The effect of a specific request should depend on whether the resource to which it is applied is a collection or an individual item. The following table summarizes the common conventions adopted by most RESTful implementations using the ecommerce example. Note that not all of these requests might be implemented; it depends on the specific scenario. The purpose of GET and DELETE requests is relatively straightforward, but there is scope for confusion concerning the purpose and effects of POST and PUT requests. A POST request should create a new resource with data provided in the body of the request. In the REST model, you frequently apply POST requests to resources that are collections; the new resource is added to the collection. Note You can also define POST requests that trigger some functionality (and that don't necessarily return data), and these types of request can be applied to collections. For example, you could use a POST request to pass a timesheet to a payroll processing service and get the calculated taxes back as a response. A PUT request is intended to modify an existing resource. If the specified resource does not exist, the PUT request could return an error (in some cases, it might actually create the resource). PUT requests are most frequently applied to resources that are individual items (such as a specific customer or order); they can also be applied to collections, although this is less commonly implemented. Note that PUT requests are idempotent whereas POST requests are not; if an application submits the same PUT request multiple times the results should always be the same (the same resource will be modified with the same values), but if an application repeats the same POST request the result will be the creation of multiple resources. Note Strictly speaking, an HTTP PUT request replaces an existing resource with the resource specified in the body of the request.
If the intention is to modify a selection of properties in a resource but leave other properties unchanged, then this should be implemented by using an HTTP PATCH request. However, many RESTful implementations relax this rule and use PUT for both situations. Processing HTTP requests The data included by a client application in many HTTP requests, and the corresponding response messages from the web server, could be presented in a variety of formats (or media types). For example, the data that specifies the details for a customer or order could be provided as XML, JSON, or some other encoded and compressed format. A RESTful web API should support different media types as requested by the client application that submits a request. When a client application sends a request that returns data in the body of a message, it can specify the media types it can handle in the Accept header of the request. The following code illustrates an HTTP GET request that retrieves the details of order 2 and requests the result to be returned as JSON (the client should still examine the media type of the data in the response to verify the format of the data returned): GET HTTP/1.1 ... Accept: application/json ... If the web server supports this media type, it can reply with a response that includes a Content-Type header specifying the format of the data in the body of the message: Note For maximum interoperability, the media types referenced in the Accept and Content-Type headers should be recognized MIME types rather than some custom media type. HTTP/1.1 200 OK ... Content-Type: application/json; charset=utf-8 ... Date: Fri, 22 Aug 2014 09:18:37 GMT Content-Length: ... {"orderID":2,"productID":4,"quantity":2,"orderValue":10.00} If the web server does not support the requested media type, it can send the data in a different format. In all cases it must specify the media type (such as application/json) in the Content-Type header. It is the responsibility of the client application to parse the response message and interpret the results in the message body appropriately. Note that in this example, the web server successfully retrieves the requested data and indicates success by passing back a status code of 200 in the response header. If no matching data is found, it should instead return a status code of 404 (Not Found), and the body of the response message can contain additional information. The format of this information is specified by the Content-Type header, as shown in the following example: GET HTTP/1.1 ... Accept: application/json ... Order 222 does not exist, so the response message looks like this: HTTP/1.1 404 Not Found ... Content-Type: application/json; charset=utf-8 ... Date: Fri, 22 Aug 2014 09:18:37 GMT Content-Length: ... {"message":"No such order"} When an application sends an HTTP PUT request to update a resource, it specifies the URI of the resource and provides the data to be modified in the body of the request message. It should also specify the format of this data by using the Content-Type header. A common format used for text-based information is application/x-www-form-urlencoded, which comprises a set of name/value pairs separated by the & character. The next example shows an HTTP PUT request that modifies the information in order 1: PUT HTTP/1.1 ... Content-Type: application/x-www-form-urlencoded ... Date: Fri, 22 Aug 2014 09:18:37 GMT Content-Length: ...
ProductID=3&Quantity=5&OrderValue=250 If the modification is successful, it should ideally respond with an HTTP 204 status code, indicating that the process has been successfully handled but that the response body contains no further information. The Location header in the response contains the URI of the newly updated resource: HTTP/1.1 204 No Content ... Location: ... Date: Fri, 22 Aug 2014 09:18:37 GMT Tip If the data in an HTTP PUT request message includes date and time information, make sure that your web service accepts dates and times formatted following the ISO 8601 standard. If the resource to be updated does not exist, the web server can respond with a Not Found response as described earlier. Alternatively, if the server actually creates the object itself it could return the status codes HTTP 200 (OK) or HTTP 201 (Created), and the response body could contain the data for the new resource. If the Content-Type header of the request specifies a data format that the web server cannot handle, it should respond with HTTP status code 415 (Unsupported Media Type). The format of an HTTP POST request that creates a new resource is similar to that of a PUT request; the message body contains the details of the new resource to be added. However, the URI typically specifies the collection to which the resource should be added. The following example creates a new order and adds it to the orders collection: POST HTTP/1.1 ... Content-Type: application/x-www-form-urlencoded ... Date: Fri, 22 Aug 2014 09:18:37 GMT Content-Length: ... productID=5&quantity=15&orderValue=400 If the request is successful, the web server should respond with a message with HTTP status code 201 (Created). The Location header should contain the URI of the newly created resource, and the body of the response should contain a copy of the new resource; the Content-Type header specifies the format of this data: HTTP/1.1 201 Created ... Content-Type: application/json; charset=utf-8 Location: ... Date: Fri, 22 Aug 2014 09:18:37 GMT Content-Length: ... {"orderID":99,"productID":5,"quantity":15,"orderValue":400} Tip If the data provided by a PUT or POST request is invalid, the web server should respond with a message with HTTP status code 400 (Bad Request). The body of this message can contain additional information about the problem with the request and the formats expected, or it can contain a link to a URL that provides more details. To remove a resource, an HTTP DELETE request simply provides the URI of the resource to be deleted. The following example attempts to remove order 99: DELETE HTTP/1.1 ... If the delete operation is successful, the web server should respond with HTTP status code 204, indicating that the process has been successfully handled but that the response body contains no further information (this is the same response returned by a successful PUT operation, but without a Location header, as the resource no longer exists). It is also possible for a DELETE request to return HTTP status code 200 (OK) or 202 (Accepted) if the deletion is performed asynchronously. HTTP/1.1 204 No Content ... Date: Fri, 22 Aug 2014 09:18:37 GMT If the resource is not found, the web server should return a 404 (Not Found) message instead. Tip If all the resources in a collection need to be deleted, enable an HTTP DELETE request to be specified for the URI of the collection rather than forcing an application to remove each resource in turn from the collection.
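The conventions above can be exercised from any HTTP client. The following Python sketch (using the requests library, with an assumed base address api.example.com) creates, updates, and deletes an order while checking the status codes discussed in this section; the form-encoded bodies mirror the examples above.

import requests

BASE = "https://api.example.com"   # hypothetical base address

# POST: add a new order to the orders collection (form-urlencoded body).
new_order = {"productID": 5, "quantity": 15, "orderValue": 400}
resp = requests.post(f"{BASE}/orders", data=new_order)
assert resp.status_code == 201                 # Created
order_uri = resp.headers["Location"]           # URI of the new resource

# PUT: replace the order just created with updated values.
update = {"productID": 3, "quantity": 5, "orderValue": 250}
resp = requests.put(order_uri, data=update)
assert resp.status_code in (200, 204)

# DELETE: remove the order; 204 indicates success with no response body.
resp = requests.delete(order_uri)
assert resp.status_code in (200, 202, 204)

# A request for a resource that no longer exists should return 404.
assert requests.get(order_uri).status_code == 404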
Filtering and paginating data You should endeavor to keep the URIs simple and intuitive. Exposing a collection of resources through a single URI assists in this respect, but it can lead to applications fetching large amounts of data when only a subset of the information is required. Generating a large volume of traffic impacts not only the performance and scalability of the web server but can also adversely affect the responsiveness of client applications requesting the data. For example, if orders contain the price paid for the order, a client application that needs to retrieve all orders that have a cost over a specific value might need to retrieve all orders from the /orders URI and then filter these orders locally. Clearly this process is highly inefficient; it wastes network bandwidth and processing power on the server hosting the web API. One solution may be to provide a URI scheme such as /orders/ordervalue_greater_than_n where n is the order price, but for all but a limited number of prices such an approach is impractical. Additionally, if you need to query orders based on other criteria, you can end up having to provide a long list of URIs with possibly non-intuitive names. A better strategy for filtering data is to provide the filter criteria in the query string that is passed to the web API, such as /orders?ordervaluethreshold=n. In this example, the corresponding operation in the web API is responsible for parsing and handling the ordervaluethreshold parameter in the query string and returning the filtered results in the HTTP response. Some simple HTTP GET requests over collection resources could potentially return a large number of items. To combat this possibility, you should design the web API to limit the amount of data returned by any single request. You can achieve this by supporting query strings that enable the user to specify the maximum number of items to be retrieved (which could itself be subject to an upper bound to help prevent denial-of-service attacks) and a starting offset into the collection. For example, the query string in the URI /orders?limit=25&offset=50 should retrieve 25 orders starting with the 50th order found in the orders collection. As with filtering data, the operation that implements the GET request in the web API is responsible for parsing and handling the limit and offset parameters in the query string. To assist client applications, GET requests that return paginated data should also include some form of metadata that indicates the total number of resources available in the collection. You might also consider other intelligent paging strategies; for more information, see API Design Notes: Smart Paging. You can follow a similar strategy for sorting data as it is fetched; you could provide a sort parameter that takes a field name as the value, such as /orders?sort=ProductID. However, note that this approach can have a deleterious effect on caching (query string parameters form part of the resource identifier used by many cache implementations as the key to cached data). You can extend this approach to limit (project) the fields returned if a single resource item contains a large amount of data. For example, you could use a query string parameter that accepts a comma-delimited list of fields, such as /orders?fields=ProductID,Quantity, as the sketch below illustrates.
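The Python sketch below passes the limit, offset, sort, and fields parameters discussed above to a hypothetical /orders collection. The parameter names follow the examples in this section; the base address is an assumption for illustration.

import requests

BASE = "https://api.example.com"   # hypothetical base address

# Fetch 25 orders starting at offset 50, sorted by ProductID, and ask
# the service to project only the two named fields.
params = {
    "limit": 25,
    "offset": 50,
    "sort": "ProductID",
    "fields": "ProductID,Quantity",
}
resp = requests.get(f"{BASE}/orders", params=params,
                    headers={"Accept": "application/json"})
resp.raise_for_status()
page = resp.json()
print(f"received {len(page)} orders")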
Handling large binary resources A single resource may contain large binary fields, such as files or images. To overcome the transmission problems caused by unreliable and intermittent connections and to improve response times, consider providing operations that enable such resources to be retrieved in chunks by the client application. To do this, the web API should support the Accept-Ranges header for GET requests for large resources, and ideally implement HTTP HEAD requests for these resources. The Accept-Ranges header indicates that the GET operation supports partial results, and that a client application can submit GET requests that return a subset of a resource specified as a range of bytes. A HEAD request is similar to a GET request except that it only returns a header that describes the resource and an empty message body. A client application can issue a HEAD request to determine whether to fetch a resource by using partial GET requests. The following example shows a HEAD request that obtains information about a product image: HEAD HTTP/1.1 ... The response message contains a header that includes the size of the resource (4580 bytes) and an Accept-Ranges header indicating that the corresponding GET operation supports partial results: HTTP/1.1 200 OK ... Accept-Ranges: bytes Content-Type: image/jpeg Content-Length: 4580 ... The client application can use this information to construct a series of partial GET operations, each retrieving a range of bytes of the resource. (The first request and its binary response payload are not shown here.) A subsequent request from the client application can retrieve the remainder of the resource by using an appropriate Range header: GET HTTP/1.1 Range: bytes=2500- ... The corresponding result message should look like this: HTTP/1.1 206 Partial Content ... Accept-Ranges: bytes Content-Type: image/jpeg Content-Length: 2080 Content-Range: bytes 2500-4579/4580 ...
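A client can combine HEAD and Range requests as described above. This Python sketch assumes the image URI and the chunk size, and relies on the Accept-Ranges and Content-Range behavior shown in the examples; it is a sketch, not a definitive implementation.

import requests

IMAGE_URI = "https://api.example.com/products/10/image"   # hypothetical URI
CHUNK = 2500                                              # bytes per request

# HEAD: discover the size of the resource and whether byte ranges are accepted.
head = requests.head(IMAGE_URI)
size = int(head.headers["Content-Length"])
assert head.headers.get("Accept-Ranges") == "bytes"

# Issue partial GETs until the whole resource has been retrieved.
data = bytearray()
for start in range(0, size, CHUNK):
    end = min(start + CHUNK - 1, size - 1)
    part = requests.get(IMAGE_URI, headers={"Range": f"bytes={start}-{end}"})
    assert part.status_code == 206            # Partial Content
    data.extend(part.content)

print(f"downloaded {len(data)} of {size} bytes")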
Using the HATEOAS approach to enable navigation to related resources As an example, to handle the relationship between customers and orders, the data returned in the response for a specific order should contain URIs in the form of hyperlinks identifying the customer that placed the order, and the operations that can be performed on that customer. GET HTTP/1.1 Accept: application/json ... The body of the response message contains a links array that specifies the nature of the relationship (customer), the URI of the customer (/customers/3), how to retrieve the details of this customer (GET), and the MIME types that the web server supports for retrieving this information (text/xml and application/json). This is all the information that a client application needs to be able to fetch the details of the customer. Additionally, the links array also includes links for the other operations that can be performed, such as PUT (to modify the customer, together with the format that the web server expects the client to provide) and DELETE. HTTP/1.1 200 OK ... Content-Type: application/json; charset=utf-8 ... Content-Length: ... {"orderID":3,"productID":2,"quantity":4,"orderValue":16.60,"links":[(some links omitted){"rel":"customer","href":"/customers/3","action":"GET","types":["text/xml","application/json"]},{"rel":"customer","href":"/customers/3","action":"PUT","types":["application/x-www-form-urlencoded"]},{"rel":"customer","href":"/customers/3","action":"DELETE","types":[]}]} For completeness, the links array should also include self-referencing information pertaining to the resource that has been retrieved. These links were omitted from the previous example but are included in the following code. Notice that in these links, the relationship self has been used to indicate that this is a reference to the resource being returned by the operation: HTTP/1.1 200 OK ... Content-Type: application/json; charset=utf-8 ... Content-Length: ... {"orderID":3,"productID":2,"quantity":4,"orderValue":16.60,"links":[{"rel":"self","href":"/orders/3","action":"GET","types":["text/xml","application/json"]},{"rel":"self","href":"/orders/3","action":"PUT","types":["application/x-www-form-urlencoded"]},{"rel":"self","href":"/orders/3","action":"DELETE","types":[]},{"rel":"customer","href":"/customers/3","action":"GET","types":["text/xml","application/json"]},{"rel":"customer" (remaining customer links omitted)}]} For this approach to be effective, client applications must be prepared to retrieve and parse this additional information. Versioning a RESTful web API It is highly unlikely that, in all but the simplest of situations, a web API will remain static. Big changes could be represented as new resources or new links. Adding content to existing resources might not present a breaking change, as client applications that are not expecting to see this content will simply ignore it. For example, a request to the URI /customers/3 should return the details of a single customer containing the id, name, and address fields expected by the client application: HTTP/1.1 200 OK ... Content-Type: application/json; charset=utf-8 ... Content-Length: ... {"id":3,"name":"Contoso LLC","address":"1 Microsoft Way Redmond WA 98053"} Note For the purposes of simplicity and clarity, the example responses shown in this section do not include HATEOAS links. If the DateCreated field is added to the schema of the customer resource, then the response simply includes that additional field, and existing clients can ignore it. Version information can also be carried in the URI or in the query string, although query string parameters can have an adverse impact on caching. Another option is to utilize a custom header, named Custom-Header in this example; the value of this header indicates the version of the web API. Version 1: GET HTTP/1.1 ... Custom-Header: api-version=1 ... HTTP/1.1 200 OK ... Content-Type: application/json; charset=utf-8 ... Content-Length: ... {"id":3,"name":"Contoso LLC","address":"1 Microsoft Way Redmond WA 98053"} Version 2: GET HTTP/1.1 ... Custom-Header: api-version=2 ... Note that the version 2 response can then include additional or restructured fields without affecting clients that still request version 1. A short client sketch that sends this header and follows the links array appears after the reference list below. - The Microsoft REST API Guidelines contain detailed recommendations for designing public REST APIs. - The RESTful Cookbook contains an introduction to building RESTful APIs. - The Web API Checklist contains a useful list of items to consider when designing and implementing a web API. - The Open API Initiative site contains all related documentation and implementation details on Open API.
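Tying the links array and the header-based versioning idea together, the sketch below requests an order, asks for version 2 through the Custom-Header header described above, and follows the customer link advertised by the response instead of building the URI by hand. The base address and the exact header value format are assumptions; the link fields (rel, href, action) follow the examples in this article.

import requests

BASE = "https://api.example.com"            # hypothetical base address
HEADERS = {
    "Accept": "application/json",
    "Custom-Header": "api-version=2",       # header-based versioning, as above
}

order = requests.get(f"{BASE}/orders/3", headers=HEADERS).json()

# Follow the HATEOAS link whose relationship is "customer" and whose
# advertised action is GET; the href may be relative, as in the examples.
link = next(l for l in order["links"]
            if l["rel"] == "customer" and l["action"] == "GET")
href = link["href"]
customer_uri = href if href.startswith("http") else f"{BASE}{href}"
customer = requests.get(customer_uri, headers=HEADERS).json()
print(customer["name"])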
https://docs.microsoft.com/es-es/azure/architecture/best-practices/api-design
2017-08-16T15:34:59
CC-MAIN-2017-34
1502886102307.32
[]
docs.microsoft.com
Loggregator Guide for Cloud Foundry Operators This topic contains information for Cloud Foundry deployment operators about how to configure the Loggregator system to avoid data loss with high volumes of logging and metrics data. Scaling Loggregator When the volume of log and metric data generated by Elastic Runtime components exceeds the storage buffer capacity of the Dopplers that collect it, data can be lost. Configuring System Logging in Elastic Runtime explains how to scale the Loggregator system to keep up with high stream volume and minimize data loss. Scaling Nozzles You can scale nozzles using the subscription ID specified when the nozzle connects to the Firehose. If you use the same subscription ID on each nozzle instance, the Firehose evenly distributes events across all instances of the nozzle. For example, if you have two nozzles with the same subscription ID, the Firehose sends half of the events to one nozzle and half to the other. Similarly, if you have three nozzles with the same subscription ID, the Firehose sends each instance one-third of the event traffic. Two kinds of alerts indicate that a nozzle is consuming events too slowly: TruncatingBuffer.DroppedMessages: When the Doppler buffer fills, Doppler drops messages and emits both a log message and a TruncatingBuffer.DroppedMessages counter event. The nozzle receives both messages from the Firehose, alerting the operator to the performance issue. PolicyViolation error: The Traffic Controller periodically sends ping control messages over the Firehose WebSocket connection. If a client does not respond to a ping with a pong message within 30 seconds, the Traffic Controller closes the WebSocket connection with the WebSocket error code ClosePolicyViolation (1008). The nozzle should intercept this WebSocket close error, alerting the operator to the performance issue. An operator can scale the number of nozzles in response to these alerts to minimize the loss of data. Forwarding Logs to an External Service You can configure Elastic Runtime to forward log data from components and apps to an external aggregator service instead of routing it to the Loggregator Firehose. Configuring System Logging in Elastic Runtime explains how to enable log forwarding by specifying the aggregator address, port, and protocol. Using Log Management Services explains how to bind applications to the external service and configure it to receive logs from Elastic Runtime. Log Message Size Constraints The Diego cell emits application logs as UDP messages to the Metron agent. Because UDP limits the size of a single message, Diego breaks up log messages greater than approximately 60 KiB into multiple envelopes to mitigate this constraint.
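To illustrate how the shared subscription ID scales a nozzle, here is a schematic Python sketch. It assumes the third-party websocket-client package, the conventional wss://doppler.SYSTEM-DOMAIN/firehose/SUBSCRIPTION-ID endpoint, and a valid OAuth token; real nozzles also decode the binary dropsonde envelopes with the protobuf definitions, which is omitted here. Running two copies of this script with the same SUBSCRIPTION_ID would split the Firehose event volume evenly between them.

import websocket   # assumes the third-party websocket-client package

DOPPLER = "wss://doppler.system.example.com:443"   # assumed system domain
SUBSCRIPTION_ID = "example-nozzle"                 # shared by all instances
TOKEN = "bearer REPLACE-WITH-OAUTH-TOKEN"          # assumed valid token

def on_message(ws, message):
    # Messages are binary dropsonde envelopes; decoding them requires the
    # dropsonde protobuf definitions and is omitted from this sketch.
    print(f"received envelope of {len(message)} bytes")

ws = websocket.WebSocketApp(
    f"{DOPPLER}/firehose/{SUBSCRIPTION_ID}",
    header=[f"Authorization: {TOKEN}"],
    on_message=on_message,
)
ws.run_forever()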
https://docs.pivotal.io/pivotalcf/1-11/loggregator/log-ops-guide.html
2017-08-16T14:56:45
CC-MAIN-2017-34
1502886102307.32
[]
docs.pivotal.io
The Email section lets you choose options for sending and addressing email alarms: Email client: Specify the email client to be used to send email alarms: KMail: When an email alarm is triggered, the email is sent automatically using KMail (which is started first if necessary). Sendmail: When an email alarm is triggered, the email is sent automatically using sendmail®. This option will only work if your system is configured to use sendmail®, or a sendmail®-compatible mail transport agent such as postfix or qmail. Copy sent emails into KMail's "sent-mail" folder: Select this option if, every time an email alarm is triggered, you want a copy of the transmitted email to be stored in KMail's sent-mail folder. Note This option is not available when KMail is selected as the email client, since KMail automatically does this. Notify when remote emails are queued: Select this option to display a notification whenever an email alarm queues an email for sending to a remote system. This may be useful if, for example, you have a dial-up connection, or email is queued in KMail's outbox folder, so that you can ensure that you do whatever is needed to actually transmit the email. Select your email address to be used as the sender's address in email alarms: Select From to enter an email address. Select Use default address from KMail or System Settings to use your default email address, which is configured in KMail or the System Settings. Select Use KMail identities to be able to choose, at the time you configure an email alarm, which of KMail's email identities to use. KMail's default identity will be used for alarms which were already configured before you selected this option. Select your email address to be used for sending blind copies of email alarms to yourself when the Copy email to self option is selected: Select Bcc to enter an email address. If blind copies are to be sent to your account on the computer which KAlarm runs on, you could simply enter your user login name here. Select Use default address from KMail or System Settings to use your default email address, which is configured in KMail or the System Settings.
https://docs.kde.org/trunk5/en/pim/kalarm/preferences-email.html
2017-04-23T09:59:57
CC-MAIN-2017-17
1492917118519.29
[array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)]
docs.kde.org
Buck - Crash Last Edit: Apr 28th, 2015 Buck-Crash "This story may not be worthwhile reading for any of the critical literary purists!" I was nicely sleeping in bed when the phone rang. "Morning Frank, this is Sheriff Peterson!" "Ah… Sheriff, hey, it's four thirty in the morning. What's so important?" I asked, groggy as heck. "Have a strange death and I thought the county medical examiner should come in and give me a little advice, please, like right now," said the Sheriff. "OK, but I am on the time-clock starting now, and this had better be something really different," I said, showing a bit of disgust at the early morning waking. I hung up the phone and walked to the bathroom to do the usual morning hygiene. Half asleep from a shortened night of slumber, I warmed a facecloth in hot water to cover my face and get my eyes opened. Loving a mystery, I had to wonder what in the world Sheriff Peterson could have that needed my attention at this hour. As the county medical examiner I deal with death, and never, not once, has a client been a no-show for one of my appointments. A quick breakfast and then the drive into town had me wondering all the more, this being the start of one weird day of uncanny events. When I turned the car to enter the parking lot, there stood Sheriff Peterson waiting for me, standing with his back to a lighted lamppost. I parked my car in my personalized parking place in the city lot, and getting out I was met by a nervous-looking sheriff. "Thanks Frank, I really appreciate you..." the sheriff began to speak his apology. "Never mind, I am awake for the day, I am here and on the clock. Now let us have a look at what is so strange and important," I said as we walked together into the police building and took the elevator down to the basement level, where we entered the city morgue and the exam room. All the while as we walked together I noted that Sheriff Peterson, the sheriff here in the county for some twenty-plus years, looked rather pale about the face that morning. Entering the medical exam room, the sheriff stopped and stood just inside the doorway to the room. I looked at my good friend of many years, a man who has looked evil, death, and most anything in the face without a shudder. Yet this morning he was scared as Hell! "Let me begin by telling you what I saw when driving my tour of duty last night and into this early morning. I was driving the main road out of town and had passed the old Handoff when, for a change of routine, I turned off onto Phelps Road, and as my cruiser was increasing speed, this… ah… person ran out from the pucker brush. I hit him at almost 50 and he flew up and landed like a ton of bricks on the trunk of the car." I was looking at a scared friend when I first peeled back the sheet covering the victim. What was lying there, looking quite deceased, was something I never expected could be real, a true mystery. I stood and stared at the stiffened expression of surprise on a face I seemed to recall having known. I turned away from the deceased victim to look over at the sheriff; he was standing by the door, his eyes closed, shivering from reasonable fears. I turned again to begin viewing the just-uncovered head, and my eyes took in a strange sight: the face of a local man, an insurance agency owner. He looked much younger about the face, as did his beard, which I knew from seeing him about the town belonged to a graying man just turned 53 years of age.
He was a Caucasian man, but lying there the color of his skin appeared to be darker by several hues. His grayed beard had changed, now a dark brown with sprinkles of black hairs, not curly normal facial hair but straight, fine hair that felt almost like animal fur. His eyes, which I had seen just a month ago, were a dark brown where they had been blue. The sense of mystery was just beginning. The victim was partially bald when I last saw him walking along the city sidewalk, but what lay there had a head covered in similar brown-to-black thick hair, covering more than the top of his head. The frontal hairline dipped in a "V" of growth partially covering his forehead. A combing of the thick hairs revealed another reason the Sheriff was acting so disturbed. My victim had two short horns sprouting out of the skull at the peak of his forehead. The horns were black and diamond hard, having needle-sharp points, similar to the horns seen on Pronghorn deer. A close inspection showed the ears of the deceased man to be of an oval shape, covered with short black hair or fur, protruding some seven inches, and pointed at the ends. Having seen this man only a week before he came to lie dead on the examination table, he could not have had such extensive hair and facial changes completed, and why in the world would he wish to look like a farm animal? Fearing some strange chemical contaminant had caused such drastic alterations, I donned a pair of surgical gloves and slipped on my throw-away autopsy suit. Then, at the thought that what I was seeing might be professionally applied makeup, I took hold of an ear, fully expecting it to pop off if given a slight pull. Nothing happened; the ear stretched but remained in place. I leaned forward and, putting on some magnifying glasses, saw where the ear was attached to the skull. "It… he has… the ears, they are real!" I said, stunned by what I was seeing. "Strange," I said as I took the lab camera in hand and took photographic evidence of a man we knew having had his head partially changed to being goat-like. My examination continued as, from whatever stimulant was the cause, the mouth of the deceased dropped open, and I could have sworn I saw his chest deflate as if he exhaled. Looking close up while wearing the magnifying glasses, the facial hair resembled fur more than it did human facial hair. As I moved the lips and began inspecting the mouth, I made verbal note of the lips being agilely muscular, similar to those of a herbivore animal. As I went to examine the oral cavity I could not believe my eyes. The teeth were those of some herbivore animal in form and arrangement. I then pulled the sheet away to see the entire body. "Holy shit Bob, what is this guy?" I stepped two steps back to stand beside the Sheriff. We stood stunned and amazed, looking at what was not our friendly local businessman, the insurance agent. In fact, he was quite less than a human then! I stepped up to the table again as Bob joined me. We marveled at the dark brown fur that covered the deceased from head to his toes. The only area of the body where the brown fur looked thinner was across the upper portion of his chest. The fur remained thickly grown everywhere else about the torso, the buttocks, groin, and thighs; it shortened and thinned below where he would have had his knees, for he had animal hocks, with hind feet and cloven hooves like those of a goat, only much larger.
Taking a body probe in hand I stroked it over the sheath grown on the lower abdomen as tipping it up, we saw inside the red tip of one very goat buck male member. The probe slipped inside as touched the penis head, and caused the deceased body to jerk, almost enough for it to fall off the exam table. “How is this possible, I saw this man walking on the sidewalk just a week ago, and he did not look like then as he does here and now?” I said, the Sheriff friend placing his hand on my shoulder, a silent way of him agreeing to what I said. Making note as of the hair/fur what grew in the groin, it was a protective layer to give cover to a much larger than is common for a man, and the fur covered testicles were the size of baseballs. I cupped and balanced the orbs in my hands and looked at Bob. “They are dense and must weigh two pounds each,” I said, both of surprise and admiration for the poor dead man. The thighs were also covered in thick dark brown fur and the muscles had become common for an animal walking on all fours. As I moved the hips for a view of the tail hanging out from under the body one arm fell off the table side. I reached to lift it up and noticed the hand. The backside of the hand was also covered in thin layer of dark brown fur, taking note to the fingernails as not those of a human by any means of thought. They were black, as blunt ended, almost square. The thumb had dwindled in size and looked almost as a dewclaw. I turned to Bob and said, “Where and how, this is, was Kyle Rooste, but I never saw anything like this he is straight out of some myth!” As I went to place the hand back on the table the fingers clenched tight with a steel hard gripping of my hand. The strength was like a steel vise and I pulled my arm back trying to break the seized grip. As then the deceased chest moved and the eyes blinked, as Kyle rolled his head toward us and smiled, he then sat up on the table, being then alive! “Gentlemen, I give you my most friendly greetings this fine morning. Oh ah Sheriff, I am sorry that I ran in front of your cruiser, but I was so scared!” Kyle said, but his manner of speech had in it an odd to him a foreign inflection, a twang not common to our way of conversing. Sheriff Peterson stood there and began to draw out his gun from the holster, as Kyle seeing his defensive movement, raised an arm, pointed an index finger at the Sheriff, and said, “Please Bob, Sheriff, if you pull out you gun from the holster you would force me to defend my person. So much is different now, and if forced to, well the two of you with a flick of my finger and you would become as all goat, and on a very permanent basis!” Kyle couched, as hacking tried to clear his throat of congealed mucus for having laid flat for some hours. As he holding his arm outstretched and finger threatening, he turned his attention toward me, and me only. “Mr. Rooste, Kyle, have you any idea what we are looking at right now,” I asked of him, my tone of voice showing the sense of fear that permeated the examination room. Kyle smiled as he looked down to see his feet and then gave a wiggle to a cloven hoof, “Yup, I sure can imagine why Bob over there wanted to draw his gun. Mine is an epic of a story that most would say after hearing of it that I am crazy. 
Then as when others like you two friends knew the whole of what I became, well, of what happened with Bob this morning and learning I am now an immortal being, having powers you never can cope or understand, I expect most who shall see me will want to run away and hide.” “Kyle, I suggest you stay seated there on the exam table, you had an accident and may have internal injuries.” I suggested, as Kyle smiled a malicious sort of smirked grin, as if he knew something I did not. He sat there on the edge of the exam table, quietly watching what I was doing when he asked for a drink. I seconded the idea, even though I wished it wasn't just water. “OK Kyle,” I replied and walked to the water cooler, where I filled two coffee mugs with the purified water. As I was handing Kyle his mug of water he looking into the cup saw the clear liquid, he did smile. “Frank do us as favor and hold both cups together as touching and let me show you one of my neater new tricks.” Kyle said, as I did if just to see what he had the power to do. Kyle closed his eyes and inhaled a deep breath as if he were focusing his mind to a plan. He exhaled a moment later, opening his eyes as raised his furred hand and with his index finger he did a flicked tink using his black fingernail to the side of one cup. Suddenly the scent of deep burgundy wine began to fill the air, as I looked into the two cups and saw they were then filled with wine. “Kyle how did you…, “I asked, as stood there stunned by what he just did. Kyle smiled and then casually shrugged his shoulders as replied, “Something she taught me to do before we made love!” I looked over at where Bob the Sheriff was then sitting and he shook his head as if not interested. He looked peeved with anger, his face a scowl as he sat watching Kyle and me conversing, the Sheriff having his hand setting on the gun in its holster, as if that would intimidate Kyle being likely an immortal being. Sniffing my cup of then wine, and taking a sip of it, my eyes widened with surprise. “Indeed, that was the richest of flavor wine I had ever tasted!” I announced to all there. “She told me how to take the wine and by ejaculating my semen into it, making the wine a potion, anyone then drinking it would become as Grecian goat satyrs, their forms, and in time their mentality as lust and passions rule their every thought. What day is this,” Kyle said as he began looking around for a calendar? “Saturday September 23,” I replied quickly, “As why?” “Because dear friends, it was just Thursday when the terse Mrs. Kelly came into my office. She had the specs on the old mansion which she was going to buy and rebuild into a bed and breakfast. You both know how tough that ole' lady can be, heck she's been married six times! Well she wanted my opinion on the value of the structure. After some arguing I agreed to meet her at the property and give her my educated opinion as to its insuring value. As you both know that place has been vacant for years, and nobody goes there! It was ten in the morning and we met outside the front iron gates. I looked at the huge old structure and thought it reminded me of the Munster's house on TV. We walked up the stone walk to the front porch, and she having keys unlocked those big oak doors. The interior of the mansion was remarkably cleaner than what I expected. I walked and checked every corner for any signs of infestations, termites and the like. The tall ceilings and that curving stairway to the upstairs were in remarkable condition. 
That was when we both heard the deep humming and the floor vibrated! We walked around trying to discover the cause, until seeing smoke coming from under a service door leading to the staircase down into the basement. As I opened the door I saw a bluish dull glow coming from where it should be a dank humid basement. The old bitty started to bitch about what it could be and insisted we go and check. I suggested of a want for calling the police and or fire department. She pushed the issue and we walked down into the blue haze, considering of what you see as being me, I should have stuck to my idea and made the emergency call! The ole' gal pushed me ahead of her and down to the basement we went. I had a flashlight but by the time we reached the basement floor the blue light illumination was enough for us to see just fine. We walked around a block wall that was built to house the thing making the brighter blue light rays,” Kyle was explaining. “Out, out of what,” yelled Bob! “A wall of twinkling lights set into stone did some of the cast bluish white light all around in the mansion basement. In the center of it was, we saw an odd oval shaped hole emitting steam into the basement, it trailing about onto the concrete floor before dissipating. It hummed as of operating by some great power, the same vibrations we felt in the upper floor, as we felt there the vibrations in the concrete floor. Then Misses Kelly said something, as she with her terse attitude toward anything she could not understand, always asked a question. She asked as what the hell was the thing? That was when we heard the voice, it a feminine sounding voice, reminiscent of a Bell telephone service operator, someone with verbal phone manners, you can remember! The voice came from within the strange thing. It said it was a portal to past, present, futures to events at places where we would meet people of renown. It began to emit a denser cloud of steam while responding to questions. I felt a sense of fear asking, wondering when it had been used last. It offered forth an answer with a vision mingled in the cloud of steam coming from out the center of the oval shaped portal. As Misses Kelly stood to my left and me being a half step closer to the portal, we saw then the kids who came, as found this thing and walked back into ancient history. “Behold the five younglings who came here previous your coming, and they asked to meet a great person with magical knowledge.” It said. As I asked it as where the kids were? The Portal answered me by telling us there of just who they met in history was a woman of magical renown. Mattering by how each youth acted or what was said determined their outcome, making their return as an unlikely event. She the woman I would get to meet as well, she used her magic to change each of the five children into bodily forms of advantage to the witch and some planned purpose. That old bitty Kelly was getting angered by my questioning, as she wanted to know if it was or could become some attraction for her bed and breakfast there. The thing ignored Misses Kelly and her question. As I asked the thing, Portal about the kids and who as where they went going thru the portal, as met with…? As then the reply being the kids met a witch of historical renown, her name was Andorra! At that moment was when Misses Kelly acting her usual bold style manner, asked for proof, she wanted to see where that place was? The steam cleared and we saw a mansion of white marble with beautiful landscaping. 
Brick pathways went everywhere and we saw a woman, she was beautiful, having long black hair, and was standing as waving, beckoning of us to come. The old bitch of a woman gave me a push and I fell head-long into the hazy scene of that long past place in time. As I fell it felt like I was swimming in gelatin, until I slammed down on my knees, awake and there kneeling before that woman standing, smiling down at her prey there on the marble step. I looked up at her; she smiled and offered me a hand up to standing before her. “Please hear me,” I said asking of her, “As I am no great student of history but recall having read about Circe being a Grecian witch to be reckoned with, and had terrible powers.” “Ha,” she said, “I am not Circe, though I know her, my name is Andorra. You are now a person alive in my realm and time, this is my island, my kingdom, and your only connection from here to where you were is that portal through time, I'm pleased to have a new visitor, a mature man might be different, someone with manners of how best to treat a woman.” said Andorra. My first thought was for safety and keeping my self physically intact and human! Then an idea formed in my mind, since Misses Kelly was watching and could hear what was said, I looking back over my shoulder said, “Please, how I would love to see all your wealth of gold and diamonds!” Andorra stared into my face with a questioning look, as seeing me smile. Andorra looked up and away, trusting me as we heard the sound of a screaming Banshee. It was Misses Kelly falling though space and time until with a dull thud she landed on her fat butt looking first at me and then turning to look at Andorra. “Ah, this is the she you mentioned, she who pushed you into the time portal, and you then baited her to come here and take of my attained wealth,” said Andorra, she with a twinkle in her eye, smiled at me with a sense of admiration. Misses Kelly got up quicker than I expected a woman in her late eighties might be able. Andorra offered then a personally conducted tour of her palace. Indeed her taste in décor was something special and splendid. There was one not so splendid portion of the tour, that when we were close to what she the witch said was the pens and pigsty. I suspected the pens and sty were where she put those condemned to live their lives, moaning and/or with squeals of frantic minds forced to live on in vile as bestial forms and a lifestyle. Andorra saw my looking down that hallway toward the horrible sounds of beasts in anguish, as she took me by the hands and said, for me such was not a certainty. My hearing that coming from her I felt bound to ask the greater question. I asked then if she hold a hate for men, similar to Circe as history suggested about her. She smiled and asked me straight out, if I could be an animal which one kind was my favorite fantasy a choice? I tried to change the subject and asked instead as of more about the five kids who came there to meet a witch. Andorra just smiled at me, as said her question needed an answer, either then or later. She then motioned for us to follow her along then that hallway leading closer to the screams, moans, and squeals of those she had damned. “If you changed the kids to forms of animals, were they allowed to return to their, our time?” I asked, as Andorra walked ahead of us and she saying nothing. Circe walked us to a balcony overlooking the pens and the sty below, as she pointed to some corrals a far distance away from the palace. 
“There is where I kept them until I was sure they could return safe to their own time. The only reason for my need to make changes to their forms was to insure their inability to tell anyone about me, or how to come here!” In the distance were a number of pens or corrals. These pens had in them a few animals of varied species, some of the African wilds, some of the more domesticated breeds. “Four of the five I sent homeward, but one I kept. See there to the nearer pen left from all the others, the Water Buffalo, his name when human was Tyler. He was a teenager of those who came here, being quite brash and disrespectful, poor manners. When he got caught chasing a heifer, I told him he was a bad boy. He sneered at me and holding up both of his hands showing but one finger of each toward me, I asked the others there as what his sign language meant. I received two answers, as both of them suggested his lacking of worth for remaining human. One meaning had to do with his mental intelligence quota, and the other meaning suggested was of an indecent manner of sexual play. Neither answer suggested he was a person of worth to send homeward. I treated him to his youthful fancy for the sensual and during his frolicking. A young satyr added a poison to what the lad was drinking. The next morning young Tyler awoke to the idea he fostered in me as how best he should be treated. Pleading with the same sense of horror that those who went into the sty would do, Tyler awoke being black of skin and fur. He had grown over the first night a bull tail, ears, and the yearling size of bovine genitals. By the end to his second month after becoming a bull buffalo, he never accepted the reality as what for him is his way of life. The others left here as different common farm animals all, the youngest boy age nine, Dillon went home as a Percheron colt. Another boy age eighteen told his friends how he would relish being a horse for its sexuality. He was all about sex and spoke of little else while residing here. I no doubt pleased him as he returned human with a personal curse. As of whatever horse he should touch or pet and he of body would merge with the animal, becoming the male member of the stallion or gelding. He returned home through the portal and seeing Dillon, he began to his young friend and oops! Johnny received his delighted fun fantasy horse colt Percheron male member; he became merged bodily and mentally with Dillon. Though he cannot see or smell anything, he knows quite well the truer taste of being a horse. As from urine and semen flowing out his round mouth, he has a tongue, and on occasion, if standing close to Dillon after urinating, you could hear his penis exclaiming its thoughts in words as to living life for sensuality, and sexual delights. Oh too, he can hear, and understands as thinks still like the brash boy he was and shall never be again. The one girl with them when they arrived was of the name Alice. She was a very considerate girl, fourteen years of her age. When and later after they arrived and I felt unsure what to do with them, I asked her to serve in my palace. Delighting both her, and I she accepted and worked well with my male and the few female satyrs I created. When came her time to decide and leave here, she asked to stay, feeling a sense of affection for…, me! Johnny reminded Alice of her mother being worried. I had allowed them to remain here as entertainment to me, they being my young and impetuous guests. 
Andorra sent Alice homeward though the time-portal agreeably as a mute and mutant her from, being a female satyr form physically in reverse. As from her udder, vulva, and tail and up she was a female goat, her legs and feet remained human for a while until she would decide to mate with a buck goat. Andorra commenting on what she did to assure those returning would never tell as of what they saw and learned. She did for me what you see and more, she made me as her, an immortal being, granting me some of her power to change then others who asking, learn too much and become for my existence as distinct liabilities. My friends, I am happily a satyr-and-goat, being more goat than of anything resembling a human. Andorra made me firstly a satyr and as time spent there in serving her matters nothing as to when returning home. I remained there being her fun consort and a pet, during some twenty-six years I serviced her need of sensuality. After giving to her whims of daily pleasuring, it was as if we were married. It was a strange relationship of woman and her almost all animal mate, while we had other matings, with other loves. Andorra finally became tired of her sex-pet and made me an offer, something she rarely did to any of her lovers. Most of her lovers went on to servicing the livestock used for food, as did she offer that to me or allowing my returning home, of the later I accepted. She arranged the phased limits to the time-portal, returning me back to within a week from the day I was pushed into the portal opening. Andorra blessed me before we parted company, granting with powers that I do not fully understand how best to use. I can with a great summoning of inner will power cause my body to change, as become a large breed of a male goat to retain a presence of anonymity. I was a big buck goat when Bob in his police cruiser struck me. My becoming as if dead returned me to this my original form as a satyr-goat. As sometime today I shall use my power again and decide to either erase all memories of me in your minds, or, if asked, I can grant you a wondrous new life being an animal. Bob, you were asking about Misses Kelly, why you would hold any worries for her safety, the old bitty! When we had just arrived and Andorra noted the different attitudes, that of an old and greedy woman, or a willing man to do as asked, whatever she asked, Andorra accepted me as her source of information about Misses Kelly. I sold her on the idea to keep Misses Kelly there. I told Andorra as of Misses Kelly and her inclination to covet and take what was not hers to own. I mentioned, and Andorra smiled when I suggested how a witch could curb the ambitions of Misses Kelly. I even suggested she keep the old bitch doing something for Andorra to gain profit. I smile and remember with a fondness how Andorra scowled at me, and as she thought about my suggestion, her expression changed to a cute grin. Andorra called one of palace mistress female satyrs to escort Misses Kelly to where she could get a personal up-close view inside gem mine. Then the old bitty heard that she became all giggly with a smile as if elated. I saw from experience and that look of greed on her face, as did Andorra. Wicked was the facial expression of Misses Kelly as she was scheming, it told much of what Andorra could expect. Andorra held the hand of a cute as arousing to me a female satyr and then took my hand and pressed them together. 
As I became included into part a spell when satyr Carol would return from her duty with Misses Kelly, Andorra wanted Carol to relate her very personal and physical needs with me. Andorra looked at me and smiled, told me to enjoy a day and night with a lusty lover to prepare me for my duties of knowing a witch woman in very carnal ways. Satyr Carol was ordered to give Misses Kelly a taste of the mountain spring water when they arrived to the mines. Told to leave the old woman there, allowing her time to learn of new ways to enjoy gems and Gem. I heard the word gems and then gem as if it were the name of somebody. Andorra noted my catching her phasing and smiled, to then inform me that Gem was a large male donkey using his equally large maleness to keep the jennet donkeys there in the mine as elated and happy. After satyr Carol led Misses Kelly along to the mountain mines, Andorra began laughing, she explained, as one drink swallowed of the spring water was poisonous, enriched with special body rendering gifts. Any person who tasted the water, their inner qualities as such the greed of Misses Kelly would feel a delight by being there in the mine. As she would change so would her mind, becoming as willing to be of help to the miners, unknowingly changing until she completed the transforming being a donkey jennet. The spring water would rejuvenate her body and give her strength and youth. As from one sip swallowed, she would begin to change. By the rising of the sun in the east she would realize her joining the herd of jennet donkeys working there in the mine. Andorra said she expected Misses Kelly to act angrily about her becoming an animal, as most women changed into any animal form tried to fight their heightened bestial inclinations. The rejuvenation to being young again, and the donkey jenny sexual arousals, the winking action of her big vulva, her vagina Andorra explained the term to me, told too of Misses Kelly becoming a donkey age of a two year old. She was to be at work in the mines for some twenty years, as by then only the brute animal mentality would be her! Andorra was not without her own thoughts of gaining profit from her herd of female donkeys. As the donkey stud named Gem would breed with a rotating choice of jennet donkeys, continuing the droppings new foals! The whole thing seemed so righteous and proper for our Misses Kelly considering she had so many husbands who died under strange circumstances! And so then Bob, as during my twenty-six years of being what I became, our Misses Kelly did her fun duties and worked the gem mine, dropped nine donkey foals. My last contact with her came when Andorra rewarded jennet Kelly with a secondary transforming, changing her into a horse mare to pull with three other mares the carriage Andorra and I rode about her kingdom. As of what you see of me, my wild and fun night with satyr Carol made me to looking as I am, and me then a male satyr. While Carol sat on me and rode me like she does her stud ponies, she told of what Andorra expected of me while I should be a servicer in her palace. I became then as a satyr, part goat, mostly still manly. As after I met and did mate with Andorra she decided how to better make me more animal, more goatish, than being mostly as manly. She was quite nice how she made the physical changing of me to this more goatish form. I served in her palace for years. Remember since this was all in the history of earth and back in time a year is less than a tick on the great clock. 
While the waiter to Andorra by day, I was every night her wild lover. She would offer me a drink, as in time I learned it was a rutting potion, putting my mind and body into a frenzy of sexual rut desire for any female. I got a taste for the rutting potion and used it carelessly, mating with a few of the hornier satyr females. If I used too much of the rutting potion, I felt a bestial inner need to procreate. That's have sex Bob, I see he's still not in the main stream of this story since his gun is now out of its holster and he has it pointed at me, the hammer as cocked. Well I had a tough time keeping my hands off those pretty satyrs. They were used by the occasional men who wandered to that isle of sex and sensuality. I had one satyr her name was Beverly, she was so nice! I was taking a platter of drinks to Andorra and a few guests when Bev bent over to pick up a broken glass dropped by a changing guest. My sense of just filled my mind as it blocked out any human moral thought. My goat cock jutted out stiff as you could not imagine! I stepped close and entered her sweet pussy, began then humping her. She was surprised when I mounted her and thrust in my ramped cock, as she looked back at me with a smile, and let me have my pleasure. I was deep in her alluring spell. I finished my third load into Beverly, and stood there with my cock in hand, looked up and saw an angered Andorra and her guests, they having a scared look. With then my semen still dripping from the end of that long red cock, Andorra angered, she ordered me to drink one of her potions, changing me into all-goat, I was banished to reside at the goat farm nearer the base of a live volcano for as long as it took me to realize I was better when a satyr and not a sex crazy buck goat. I remained there at the farm for almost two years of twenty-four-seven mating with hundreds of equally horny doe goats. When Andorra had me returned to the palace, she tried to make it sound as if I had felt a sense of repentance for my brash sex act before her guests or victims. I quickly caught the drift of what she was trying to do, as for many years I listened to insurance customers swear the collision as the other person was at fault. She changed me mostly back to looking like a satyr, adding much more fur and she liked fur, loved to comb through it with her nimble fingers. My term as he lover began in earnest then and continued until the night before last when she just outright asked me if I wanted to return to my home and times. I agreed, promising not to tell about her and how to get to her through the time portal. My getting nearly killed by Bob meant I had to break that promise and do some explaining.” Ken said, he doing some major explaining of where he went, what happened, and why we did not see Misses Kelly looming about town. “Bob,” I said as Ken was silent but looking at me with an odd facial expression. “I think before anyone comes in to work, I should take satyr-Ken home with me! Since I live alone he can stay hidden until we can if possible, figure a way to get him normal again, maybe?" “OK, but what happened to Misses Kelly,” asked Bob, he his mind full of details? “Bob, she is still back in time being then as there a horse mare. One thing is for sure she won't be a pain in the ass for this community any more,” said Ken with a chuckle. I helped Ken into my car and then drove to my very rural home. 
On the way I asked Ken, “Do you still have strong sexual lusts, did that carry with you returning home?” Ken smiled wide, “Oh tell me Frank you are gay!” “No buddy I am not, but I need to plan for your new form and any animal like strange attitudes which might surface, I am a doctor.” I said, but seeing Ken having a small grin as if he were doing some plans of his own. The trip was a quiet one as Ken shut up and sat fondling his hairy sheath. I watched him rub pointy head of his red cock until we turned into my driveway. We were quick to get him inside and offered he could shower if he felt the need. Ken agreed and I went to the kitchen to prepare breakfast as he went to the bathroom. An hour and Ken stepped out of the bathroom. His being wet made his muscular form not covered with fur to send a nervous chill down my spine. The sight of this sex god would send any woman into a lust for an orgy. Yet for some strange reason I envied him, and when he came close to me I felt a need to stroke his animal fur. Breakfast took some time as Ken polished off two family sized boxes of oat cereal and a gallon of whole milk. As I started to wash the few dishes from breakfast, I felt Ken step near as if to look over my shoulder. I remember his words, “Come with me, and let us romp on your bed my love!” I felt his hairy palms touch my face and all went black. I had a dream of the worst sensual orgy, something never before in my life. My mind said this was so bad while my lusts said it felt so, so, so good. I began to awaken as thoughts and memories of standing in the kitchen when.... I opened my eyes and looked up at a smiling hairy face of Ken leaning over my rump and back, his red goat cock shoved up my anus. “Ouch…, Ugg-yuck, why do I have a salty taste of something in my mouth, was I sucking on your red cock? I said, as I could not believe where I was, kneeling before a large male satyr with his salty red cock spitting out satyr semen up my butt. I tried to move away, and with a harsh jerk I yanked his cock from out of my anus and cursed at him for what he had made me do. All Ken did was smile as I ran off to the bathroom in hope of puking or draining my bowels. Try as I did nothing came up or out. I returned to the bedroom about to give a piece of my mind when the phone rang. "Frank, this is Bob, ah could you two fellows meet me at the old Mansion, I have a report of someone heard screaming in there!" I agreed and started to redress as Ken began to talk in a soft manor again. “Stay away from me, I am not your kind, I feel angry and well used!” I said! I was in the doorway to the garage when Ken ducked under my left arm and stood directly in front of me. "I loved our time together let us do it again, now!" Before I could step back he touched his hairy hands to my face and all went black again. Like a dream it was yet I knew it wasn't a dream but could not wake up or move away. I was on my knees and Ken was humped over my back, he thrusting his cock deep into me but not up my butt. What he was talking about him and me was as if I were a female being fucked by a goat. As much as I wished to get away, my inner lust was of a bestial desire longing for a goat with a bigger cock. The more Ken thrust, and time after time unloaded into me, I felt my sense of lust increase, I wanting more even more! Ring, ring went the phone I awoke and rolled over on the bed a picked up the receiver. "Hey, where the hell is you guys it's been over two hours, get over here now,” yelled Bob the Sheriff!" 
I apologized and rolled up to a sitting position on the bed. I was again naked as Ken had somehow put me in a trance and then played with my body. I felt a funny feeling as something strange and looked down to my groin. Expecting to see there of two tennis sized balls in a light brown fuzzy covered sack. My groin was smooth with fur surrounding my changed to being womanly a vagina. I was slimmer, shapely, having the breasts of a young woman; they tapered up to large, erect nipples. The brown goat fur grew from many places but most prevalent as from my navel down to heavy shaggy fur about my butt and thighs. “What the Hell is this,” I yelled? “Frank…, no Francine is your new name, remember it!” Ken said, and as if a spell entrapped my mind, I forgot my ever being a man then and began acting as a female satyr in sexual heat. “Francine, my dear and such a wonderful lover, we made love twice and if three times the new you will become the permanent you to join with me as a satyr female, a Satyress. We shall be as lovers and roam the countryside mounting and increasing our herd of males and females.” Ken said as I swooned for his every word, accepting what I was and finding new ways of appreciating being for Ken his Satyress. An hour later we arrived at the mansion, as neither of us being satyr could drive a vehicle, we ran the six miles across country to meet Bob waiting at the mansion. When Bob saw first Ken run up and then me being a female satyr, he did not waste time to ask any questions. Bow led the way up the path from the front gates to the front porch of the mansion. Ken strode past Bob and as I tried to walk past, Bob reached up a hand and gave my left breast-teat-nipple a slight twist to arouse it further. I tried to ignore Bob, and I walked in the open front doors of the mansion. The sounds of a human screaming, and of an animal in pain doing about the same, the noise was coming from the mansion basement. We quickly made our way down the stairs. Turning the corner we stopped and looked in amazement. Standing there kicking with hind legs and shod hoofs was a white mare taking her hate out at the time portal. “Misses Kelly,” we three all said to once! The horse turned to look at us, her face was not that of a mare, but that of Misses Kelly merged neatly as if properly a part of the horse head. Her body was that of a Percheron mare, her stubbed tail swishing her anus and vulva. “Damn it, look at me, this is what that evil bitch she sent me back home being an animal!” Misses Kelly remarked, the tone in her voice said how mad she was, but me looking at her being a mare, she never looked better. “Andorra told me she did, that I would return and those who greeted me would stay by my side for the rest of their lives! I don't know what to think, as Bob is still the sheriff, and Ken my insurance agent is a satyr, then she, oh damn, she was a he and he was my insurance agent!” Misses Kelly stated, she knew us as we her. Remembering what Ken told of his story I thought to try and ask the Time Portal a question. “Portal, can these people go back to a time just before their change was done to them and then request their return here to be again as normal people?" Steam rolled out the front of the blue portal and then it answered. “Yes, my sir it could be done, a small chance remains for them to be human. A better chance that they would remain there, being as the beasts they are and never return here again!” I turned to Bob and whispered a thought. 
Bob told Misses Kelly to face the portal, and she did as told. I took careful hold of Ken and asked the portal to arrange its scene to isle just before their present changes happened. Then I looked at sheriff Bob and said, “One Two Three, and Bob give them a shove!” I shoved satyr Ken into the portal as Mare Kelly screamed and leaped into the portal. I turned and looked at Bob, he standing there with his Taser in hand, as he turned with a smile, said, “I shot the mare a Taser blast in her black vulva, and she leaped into the portal!” “Is all of time now as it was before,” I asked the Portal. A moment later and the Portal replied saying, “All is no not quite the same as it was yesterday, the satyr made a ripple in time!” “A ripple,” I asked, “What is that?” "The ripple is not a what, but a who! The satyr mated with you, and you became both female your gender and a satyr your species, today you will change the history of another!” Said the Portal voice, it speaking of me and what I would do, being a female satyr with sensual infecting power. Bob looked at me as asked, “You had sex with Ken, that monster?” “Ken did not ask and gave me no choice. He had an ability to put a person in a trance with his touch. He made me suck his cock and then he fucked my butt. Each time and several times he unloaded into me his semen, I was helpless!” I replied. Bob asked the portal, “Is your time sent still for the witching isle?” “I await your entry and will oversee your pleasure,” said the portal. I was looking at the scene in the open void of the Portal, seeing below as the Mare Kelly was kicking the shit out of Ken. Out of a corner of my eye I saw Bob start to run and figured he wanted to give me a push into Andorra’s witchery realm. At the last second I stepped to the right and watched my school chum and local Sheriff disappear into the void. A holler and a dull thud I heard, that coming from the witch Andorra who Sheriff Bob fell on as she was standing on her palace walk. “Time has changed but little, as the man who just entered shall return to regain his full life here,” stated the Portal! My thoughts seemed clouded and disinterested in Bob, Ken and Mare Kelly. I felt a longing and a real need to have sex. I returned to the mansion basement stairs, as climbing them my breasts swayed, tail wiggled, and I felt so much like a horny female goat down low, that reaching the top step I felt more and more my desire for wild sex. I ran toward my house and safety. After arriving there I looked at the phone and there was a message waiting. The message service light was flashing and I pushed the button. “Hi Frank… you dirty bastard. I got back and we are due for a little talk! It can wait until morning. We need to meet at the mansion about nine, so until then missy-buddy!” Bob had me worried and before going to bed I sat for a long while using a pop bottle to stimulate my new vagina. I had a thought to torch the old mansion and close the Portal from bothering my life again!
http://docs-lab.com/submissions/586/buck-crash
2017-04-23T09:51:28
CC-MAIN-2017-17
1492917118519.29
[]
docs-lab.com
This document briefly outlines the structure of the tool system in the GAL canvases. The GAL (Graphics Abstraction Layer) framework provides a powerful method of easily adding tools to KiCad. Compared to the older "legacy" canvas, GAL tools are more flexible, powerful and much easier to write. A GAL "tool" is a class which provides one or more "actions" to perform. An action can be a simple one-off action (e.g. "zoom in" or "flip object"), or an interactive process (e.g. "manually edit polygon points"). Examples of tools in the Pcbnew GAL include the selection tool and the editor control tool that handles operations such as zone filling.

There are two main aspects to tools: the actions and the tool class.

The TOOL_ACTION class acts as a handle for the GAL framework to call on actions provided by tools. Generally, every action, interactive or not, has a TOOL_ACTION instance. This provides:

- A name such as pcbnew.ToolName.actionName, which is used internally to dispatch the action.
- A scope: AS_CONTEXT, when the action is specific to a particular tool (for example, pcbnew.InteractiveDrawing.incWidth increases the width of a line while the line is still being drawn), or AS_GLOBAL, when the tool can always be invoked, by a hotkey, or during the execution of a different tool (for example, the zoom actions can be accessed from the selection tool's menu during the interactive selection process).
- Flags such as AF_ACTIVATE, which indicates that the tool enters an active state.

GAL tools inherit the TOOL_BASE class. A Pcbnew tool will generally inherit from PCB_TOOL, which is a TOOL_INTERACTIVE, which is a TOOL_BASE. In the future, Eeschema tools will be developed in a similar manner.

The tool class for a tool can be fairly lightweight - much of the functionality is inherited from the tool's base classes. These base classes provide access to several things, particularly:

- The wxWindow, which can be used to modify the viewport, set cursors and status bar content, etc. Access it with getEditFrame<T>(), where T is the frame subclass you want. In PCB_TOOL, this is likely PCB_EDIT_FRAME.
- The TOOL_MANAGER, which can be used to access other tools' actions.
- The model (an EDA_ITEM) which backs the tool, accessed with getModel<T>(). In PCB_TOOL, the model type T is BOARD, which can be used to access and modify the PCB content.
- KIGFX::VIEW and KIGFX::VIEW_CONTROLS, which are used to manipulate the GAL canvas.

The major parts of a tool's implementation are the functions used by the TOOL_MANAGER to set up and manage the tool:

- The tool's name, e.g. pcbnew.ToolName.
- The Init() function (optional), which is commonly used to fill in a context menu, either belonging to this tool, or to access another tool's menu and add items to that. This function is called once, when the tool is registered with the tool manager.
- The Reset() function, called when the model (e.g. the BOARD) is reloaded, when the GAL canvas is switched, and also just after tool registration. Any resource claimed from the GAL view or the model must be released in this function, as they could become invalid.
- The SetTransitions() function, which maps tool actions to handler functions within the tool class. These handlers are called by the TOOL_MANAGER in case an associated event arrives (the association is created with the TOOL_INTERACTIVE::Go() function).
- The action handlers themselves, each registered in the SetTransitions() map. The action handler for an interactive action handles repeated events from the tool manager in a loop, until an event indicates that the tool should exit. Interactive tools also normally indicate that they are active with a cursor change and by setting a status string. For example:
int TOOL_NAME::someAction( const TOOL_EVENT& aEvent )
{
    auto& frame = *getEditFrame<PCB_EDIT_FRAME>();

    // set tool hint and cursor (actually looks like a crosshair)
    frame.SetToolID( ID_PCB_SHOW_1_RATSNEST_BUTT,
            wxCURSOR_PENCIL, _( "Select item to move left" ) );
    getViewControls()->ShowCursor( true );

    // activate the tool, now it will be the first one to receive events
    // you can skip this, if you are writing a handler for a single action
    // (e.g. zoom in), as opposed to an interactive tool that requires further
    // events to operate (e.g. dragging a component)
    Activate();

    // the main event loop
    while( OPT_TOOL_EVENT evt = Wait() )
    {
        if( evt->IsCancel() || evt->IsActivate() )
        {
            // end of interactive tool
            break;
        }
        else if( evt->IsClick( BUT_LEFT ) )
        {
            // do something here
        }

        // other events...
    }

    // reset the PCB frame to how it was when we got it
    frame.SetToolID( ID_NO_TOOL_SELECTED, wxCURSOR_DEFAULT, wxEmptyString );
    getViewControls()->ShowCursor( false );

    return 0;
}

Top level tools, i.e. tools that the user enters directly, usually provide their own context menu. Tools that are called only from other tools' interactive modes add their menu items to those tools' menus. To use a TOOL_MENU in a top level tool, simply add one as a member and initialise it with a reference to the tool at construction time:

class TOOL_NAME : public PCB_TOOL
{
public:
    TOOL_NAME() :
        PCB_TOOL( "pcbnew.MyNewTool" ),
        m_menu( *this )
    {}

private:
    TOOL_MENU m_menu;
};

You can then add a menu accessor, or provide a custom function to allow other tools to add any other actions, or a subset that you think appropriate. You can then invoke the menu from an interactive tool loop by calling m_menu.ShowContextMenu(). Clicking on the tool's entry in this menu will trigger the action - there is no further action needed in your tool's event loop.

The COMMIT class manages changes to EDA_ITEMs, combining changes on any number of items into a single undo/redo action. When editing PCBs, changes to the PCB are managed by the derived BOARD_COMMIT class. This class takes either a PCB_BASE_FRAME or a PCB_TOOL as an argument. Using PCB_TOOL is more appropriate for a GAL tool, since there's no need to go through a frame class if not required. The procedure of a commit is:

- Construct an appropriate COMMIT object.
- Before modifying an item, call Modify( item ) so that the current item state can be stored as an undo point.
- When adding a new item, call Add( item ). Do not delete the added item, unless you are going to abort the commit.
- When removing an item, call Remove( item ). You should not delete the removed item, it will be stored in the undo buffer.
- Finally, commit the changes with Push( "Description" ). If you performed no modifications, additions or removals, this is a no-op, so you don't need to check if you made any changes before pushing.

If you want to abort a commit, you can just destruct it, without calling Push(). The underlying model won't be updated. As an example:

// Construct commit from the current PCB_TOOL
BOARD_COMMIT commit( this );

BOARD_ITEM* modifiedItem = getSomeItemToModify();

// tell the commit we're going to change the item
commit.Modify( modifiedItem );

// update the item
modifiedItem->Move( x, y );

// create a new item
DRAWSEGMENT* newItem = new DRAWSEGMENT;
// ... set up item here

// add to commit
commit.Add( newItem );

// update the model and add the undo point
commit.Push( "Modified one item, added another" );

Without getting too heavily into the details of how the GAL tool framework is implemented under the surface, let's look at how you could add a brand new tool to Pcbnew.
Our tool will have the following (rather useless) functions: an interactive tool that selects an item and moves it to the left, a non-interactive action that places a fixed-size circle in a fixed place, and access to the existing "unfill all zones" action from the tool's menu.

The first step is to add tool actions. We will implement two actions named:

- pcbnew.UselessTool.MoveItemLeft - the interactive tool
- pcbnew.UselessTool.FixedCircle - the non-interactive tool.

The "unfill tool" already exists with the name pcbnew.EditorControl.zoneUnfillAll.

This guide assumes we will be adding a tool to Pcbnew, but the procedure for other GAL-capable canvases will be similar.

In pcbnew/tools/pcb_actions.h, we add the following to the PCB_ACTIONS class, which declares our tools:

static TOOL_ACTION uselessMoveItemLeft;
static TOOL_ACTION uselessFixedCircle;

Definitions of actions generally happen in the .cpp of the relevant tool. It doesn't actually matter where the definition occurs (the declaration is enough to use the action), as long as it's linked in the end. Similar tools should always be defined together. In our case, since we're making a new tool, this will be in pcbnew/tools/useless_tool.cpp. If adding actions to existing tools, the prefix of the tool string (e.g. "pcbnew.UselessTool") will be a strong indicator as to where to define the action.

The action definitions look like this:

TOOL_ACTION PCB_ACTIONS::uselessMoveItemLeft( "pcbnew.UselessTool.MoveItemLeft",
        AS_GLOBAL, MD_CTRL + MD_SHIFT + int( 'L' ),
        _( "Move item left" ), _( "Select and move item left" ) );

TOOL_ACTION PCB_ACTIONS::uselessFixedCircle( "pcbnew.UselessTool.FixedCircle",
        AS_GLOBAL, MD_CTRL + MD_SHIFT + int( 'C' ),
        _( "Fixed circle" ), _( "Add a fixed size circle in a fixed place" ),
        add_circle_xpm );

We have defined hotkeys for each action, and they are both global. This means you can use Shift+Ctrl+L and Shift+Ctrl+C to access each tool respectively. We defined an icon for one of the tools, which should appear in any menu the item is added to, along with the given label and explanatory tooltip.

We now have two actions defined, but they are not connected to anything. We need to define functions which implement the right actions. You can add these to an existing tool (for example PCB_EDITOR_CONTROL, which deals with many general PCB modification operations like zone filling), or you can write a whole new tool to keep things separate and give you more scope for adding tool state. We will write our own tool to demonstrate the process.

Add a new tool class header pcbnew/tools/useless_tool.h containing the following class:

class USELESS_TOOL : public PCB_TOOL
{
public:
    USELESS_TOOL();
    ~USELESS_TOOL();

    ///> React to model/view changes
    void Reset( RESET_REASON aReason ) override;

    ///> Basic initialization
    bool Init() override;

    ///> Bind handlers to corresponding TOOL_ACTIONs
    void SetTransitions() override;

private:
    ///> 'Move selected left' interactive tool
    int moveLeft( const TOOL_EVENT& aEvent );

    ///> Internal function to perform the move left action
    void moveLeftInt();

    ///> Add a fixed size circle
    int fixedCircle( const TOOL_EVENT& aEvent );

    ///> Menu model displayed by the tool.
    TOOL_MENU m_menu;
};

In pcbnew/tools/useless_tool.cpp, implement the required methods. In this file, you might also add free function helpers, other classes, and so on. You will need to add this file to pcbnew/CMakeLists.txt to build it.
Below you will find the contents of useless_tool.cpp:

#include "useless_tool.h"

#include <class_draw_panel_gal.h>
#include <view/view_controls.h>
#include <view/view.h>
#include <tool/tool_manager.h>
#include <board_commit.h>

// For frame ToolID values
#include <pcbnew_id.h>

// For action icons
#include <bitmaps.h>

// Items tool can act on
#include <class_board_item.h>
#include <class_drawsegment.h>

// Access to other PCB actions and tools
#include "pcb_actions.h"
#include "selection_tool.h"


/*
 * Tool-specific action definitions
 */
TOOL_ACTION PCB_ACTIONS::uselessMoveItemLeft( "pcbnew.UselessTool.MoveItemLeft",
        AS_GLOBAL, MD_CTRL + MD_SHIFT + int( 'L' ),
        _( "Move item left" ), _( "Select and move item left" ) );

TOOL_ACTION PCB_ACTIONS::uselessFixedCircle( "pcbnew.UselessTool.FixedCircle",
        AS_GLOBAL, MD_CTRL + MD_SHIFT + int( 'C' ),
        _( "Fixed circle" ), _( "Add a fixed size circle in a fixed place" ),
        add_circle_xpm );


/*
 * USELESS_TOOL implementation
 */

USELESS_TOOL::USELESS_TOOL() :
        PCB_TOOL( "pcbnew.UselessTool" ),
        m_menu( *this )
{
}


USELESS_TOOL::~USELESS_TOOL()
{}


void USELESS_TOOL::Reset( RESET_REASON aReason )
{
}


bool USELESS_TOOL::Init()
{
    auto& menu = m_menu.GetMenu();

    // add our own tool's action
    menu.AddItem( PCB_ACTIONS::uselessFixedCircle );
    // add the PCB_EDITOR_CONTROL's zone unfill all action
    menu.AddItem( PCB_ACTIONS::zoneUnfillAll );

    // Add standard zoom and grid tool actions
    m_menu.AddStandardSubMenus( *getEditFrame<PCB_BASE_FRAME>() );

    return true;
}


void USELESS_TOOL::moveLeftInt()
{
    // we will call actions on the selection tool to get the current
    // selection. The selection tool will handle item disambiguation
    SELECTION_TOOL* selectionTool = m_toolMgr->GetTool<SELECTION_TOOL>();
    assert( selectionTool );

    // call the actions
    m_toolMgr->RunAction( PCB_ACTIONS::selectionClear, true );
    m_toolMgr->RunAction( PCB_ACTIONS::selectionCursor, true );
    selectionTool->SanitizeSelection();

    const SELECTION& selection = selectionTool->GetSelection();

    // nothing selected, return to event loop
    if( selection.Empty() )
        return;

    BOARD_COMMIT commit( this );

    // iterate over the selected BOARD_ITEMs, moving each item
    for( auto item : selection )
    {
        commit.Modify( item );
        item->Move( wxPoint( -5 * IU_PER_MM, 0 ) );
    }

    // push commit - if the selection was empty, this is a no-op
    commit.Push( "Move left" );
}


int USELESS_TOOL::moveLeft( const TOOL_EVENT& aEvent )
{
    auto& frame = *getEditFrame<PCB_EDIT_FRAME>();

    // set tool hint and cursor (actually looks like a crosshair)
    frame.SetToolID( ID_NO_TOOL_SELECTED,
            wxCURSOR_PENCIL, _( "Select item to move left" ) );
    getViewControls()->ShowCursor( true );

    Activate();

    // handle tool events for as long as the tool is active
    while( OPT_TOOL_EVENT evt = Wait() )
    {
        if( evt->IsCancel() || evt->IsActivate() )
        {
            // end of interactive tool
            break;
        }
        else if( evt->IsClick( BUT_RIGHT ) )
        {
            m_menu.ShowContextMenu();
        }
        else if( evt->IsClick( BUT_LEFT ) )
        {
            // invoke the main action logic
            moveLeftInt();

            // keep showing the edit cursor
            getViewControls()->ShowCursor( true );
        }
    }

    // reset the PCB frame to how it was when we got it
    frame.SetToolID( ID_NO_TOOL_SELECTED, wxCURSOR_DEFAULT, wxEmptyString );
    getViewControls()->ShowCursor( false );

    // exit action
    return 0;
}


int USELESS_TOOL::fixedCircle( const TOOL_EVENT& aEvent )
{
    // new circle to add (ideally use a smart pointer)
    DRAWSEGMENT* circle = new DRAWSEGMENT;

    // Set the circle attributes
    circle->SetShape( S_CIRCLE );
    circle->SetWidth( 5 * IU_PER_MM );
    circle->SetStart( wxPoint( 50 * IU_PER_MM, 50 * IU_PER_MM ) );
    circle->SetEnd( wxPoint( 80 * IU_PER_MM, 80 * IU_PER_MM ) );
    circle->SetLayer( LAYER_ID::F_SilkS );

    // commit the circle to the BOARD
    BOARD_COMMIT commit( this );
    commit.Add( circle );
    commit.Push( _( "Draw a circle" ) );

    return 0;
}


void USELESS_TOOL::SetTransitions()
{
    Go( &USELESS_TOOL::fixedCircle, PCB_ACTIONS::uselessFixedCircle.MakeEvent() );
    Go( &USELESS_TOOL::moveLeft,    PCB_ACTIONS::uselessMoveItemLeft.MakeEvent() );
}

The last step is to register the tool in the tool manager. This is done by adding a new instance of the tool to the registerAllTools() function in pcbnew/tools/tools_common.cpp.

// add your tool header
#include <tools/useless_tool.h>

void registerAllTools( TOOL_MANAGER *aToolManager )
{
    ....
    aToolManager->RegisterTool( new USELESS_TOOL );
    ....
}

If your new tool applies in the module editor, you also need to do this in FOOTPRINT_EDIT_FRAME::setupTools(). Generally, each kind of EDA_DRAW_FRAME that can use GAL will have a place to do this.

When this is all done, you should have modified the following files:

- pcbnew/tools/pcb_actions.h - action declarations
- pcbnew/tools/useless_tool.h - tool header
- pcbnew/tools/useless_tool.cpp - action definitions and tool implementation
- pcbnew/tools/tools_common.cpp - registration of the tool
- pcbnew/CMakeLists.txt - for building the new .cpp files

When you run Pcbnew, you should be able to press Shift+Ctrl+L to enter the "move item left" tool - the cursor will change to a crosshair and "Select item to move left" appears in the bottom right corner. When you right-click, you get a menu, which contains an entry for our "create fixed circle" tool and one for the existing "unfill all zones" tool which we added to the menu. You can also use Shift+Ctrl+C to access the fixed circle action.

Congratulations, you have just created your first KiCad tool!
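If another tool needs to trigger these actions programmatically rather than via the hotkey or menu, it can go through the tool manager, just as moveLeftInt() above runs the selection actions. A minimal sketch from inside some other PCB_TOOL method — only the RunAction call is the point here, the surrounding tool class is assumed:

// Run the fixed-circle action immediately, from within another tool.
// Uses the same TOOL_MANAGER::RunAction mechanism shown in moveLeftInt().
m_toolMgr->RunAction( PCB_ACTIONS::uselessFixedCircle, true );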
http://docs.kicad-pcb.org/doxygen/md_Documentation_development_tool-framework.html
2017-04-23T09:52:47
CC-MAIN-2017-17
1492917118519.29
[]
docs.kicad-pcb.org
With Enlinkd it is possible to discover links based on information from network routing applications. Several routing daemons can be used to provide a discovery of links based on Layer 3 information. This information is provided by SNMP agents with appropriate MIB support, so a working SNMP configuration is required. The link data discovered by Enlinkd is provided in the Topology User Interface and on the detail page of a node.
http://docs.opennms.eu/latest/enlinkd/layer-3/introduction.html
2017-04-23T09:57:53
CC-MAIN-2017-17
1492917118519.29
[]
docs.opennms.eu
Current Drawing on Top

In Harmony, when you draw on a layer, the artwork is displayed in the correct order. For example, if the layer on which you are drawing is located behind an object on another layer, the lines you are drawing will be hidden behind that object. The Show Current Drawing on Top option lets you display the selected drawing on top of everything while you draw. By enabling this option, each time you select a drawing tool, the selected drawing is displayed in front of everything in the Camera view. The Show Current Drawing on Top status (enabled or disabled) is remembered when you exit Harmony. When you restart the application, the last status will be used. You only need to enable this option once; it is not necessary to do it each time you select a drawing tool.
http://docs.toonboom.com/help/harmony-11/draw-network/Content/_CORE/_Workflow/015_CharacterDesign/Drawing_Tools/019_H2_Show_Current_Drawing_on_Top.html
2017-04-23T09:57:32
CC-MAIN-2017-17
1492917118519.29
[array(['../../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../../Resources/Images/HAR/_Skins/stage.png', None], dtype=object) array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Resources/Images/HAR/_Skins/draw.png', 'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object) array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Resources/Images/HAR/_Skins/sketch.png', None], dtype=object) array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Resources/Images/HAR/_Skins/stageXsheet.png', None], dtype=object) array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Resources/Images/HAR/Stage/Character_Design/HAR11_ShowCurrentDrawingOnTop.png', 'Show Current Drawing on Top Show Current Drawing on Top'], dtype=object) ]
docs.toonboom.com
CSS Selectors

Keeping in mind the above constraints, we recommend these tips:

- Only put annotations on elements that are available for all the users you are targeting. For instance, if there is a button that only shows up for paid users, flows that point out that button should be targeted only to paid users.
- Ensure that elements you are attaching hotspots and tooltips to are consistently visible when the page has loaded.

For a deeper dive into the problem and solutions, read our doc on Faulty CSS Selectors.
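As a generic illustration (not Appcues-specific syntax — the data attribute name below is hypothetical), a selector tied to an element's position in the layout is more fragile than one tied to a stable identifier you control:

/* Brittle: breaks if the list order or page layout changes */
div.sidebar > ul > li:nth-child(3) > a

/* Sturdier: targets a stable id or a dedicated data attribute */
#upgrade-button
[data-tour-target="upgrade-button"]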
https://docs.appcues.com/article/232-css-selectors
2019-04-18T14:43:11
CC-MAIN-2019-18
1555578517682.16
[]
docs.appcues.com
How to Delete All Jobs Using the REST API Run the following commands to delete all jobs in a Databricks workspace. Identify the jobs to delete and list them in a text file:
curl -X GET -H "Authorization: Bearer <token>" https://<databricks-instance>/api/2.0/jobs/list | grep -o -P 'job_id.{0,6}' | awk -F':' '{print $2}' >> job_id.txt
Run the curl command in a loop to delete the identified jobs:
while read line
do
  job_id=$line
  curl -X POST -H "Authorization: Bearer <token>" https://<databricks-instance>/api/2.0/jobs/delete -d '{"job_id": '"$job_id"'}'
done < job_id.txt
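If you prefer Python over shell tools, the same two endpoints can be called with the requests library. The sketch below is illustrative only: the workspace URL and token are placeholders, and the response shape (a "jobs" array whose entries carry a "job_id") follows the 2.0 Jobs API used above.
import requests

HOST = "https://<databricks-instance>"
HEADERS = {"Authorization": "Bearer <token>"}

# List all jobs, then delete them one by one.
jobs = requests.get(f"{HOST}/api/2.0/jobs/list", headers=HEADERS).json()
for job in jobs.get("jobs", []):
    resp = requests.post(f"{HOST}/api/2.0/jobs/delete",
                         headers=HEADERS,
                         json={"job_id": job["job_id"]})
    resp.raise_for_status()
    print("Deleted job", job["job_id"])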
https://docs.databricks.com/user-guide/faq/howto-jobsdeleteRESTAPI.html
2019-04-18T15:31:37
CC-MAIN-2019-18
1555578517682.16
[]
docs.databricks.com
In today's competitive marketing world, vendors are in a constant race to be the first to offer innovative new features that can improve performance and attract more customers. Social media has built its own place in business, and Facebook is one of the most innovative platforms in the new world of social media advertising. Please follow the guide below to learn how to use Facebook with EasySendy Pro. Step 1. Click on the three-bar menu tab in the top left. Then hover your mouse over All Tools and, finally, click Page Posts. Step 2. In the center-top, click the + Create Post button. A floating window will appear, labeled Create Unpublished Page Post. Step 3. This part is important. At this point, scroll down and click on "Use this post for an ad. It will also be published to the Page later." Step 4. Create your ad: fill in the blanks and choose the call-to-action button by clicking the drop-down arrow and selecting which button you want to use; you can select any of the options in the drop-down menu. Be sure to scroll down, double-checking the "Use this post for an ad. It will also be published to the Page later." option as you go, and fill in all the blanks. Once you have filled in the post text as per your requirements, click the Create Post button at the bottom of the floating page. Step 5. You have now created your first Facebook post with a CTA button. On the left, select your new post by checking the box. Next, click the drop-down arrow labelled Actions. Now you have a choice to either publish the post now or schedule it for later. Step 6. Once you click on your page, here is what your post should look like on your Facebook page, with the Call-to-Action button you selected from the drop-down in the bottom right corner. Step 7. Now go to your EasySendy Pro account and click on "Social" on the right side of the dashboard. Click on Facebook Campaigns >> View All Campaigns. Step 8. Here, create the new CTA post campaign, fill in all the blanks, and click on Submit Post. Step 9. Once you submit the post, it will be shared on your Facebook page.
https://docs.easysendy.com/facebook-campaigns/kb/create-facebook-page-status-cta-post-campaign/
2019-04-18T15:23:16
CC-MAIN-2019-18
1555578517682.16
[array(['https://easysendy.com/wp-content/uploads/2018/05/report-easysendy-campiagn-fb.png', None], dtype=object) ]
docs.easysendy.com
Master channels¶ Master channels are the interface that directly or indirectly interacts with the user. Although the first master channel of EFB (EFB Telegram Master) is written as a Telegram bot, master channels can be written in many forms, such as: - A web app - A server that exposes APIs to dedicated desktop and mobile clients - A chat bot on an existing IM - A server that complies with a generic IM protocol - A CLI client - Anything else you can think of… Design guideline¶ When the master channel is implemented on an existing protocol or platform, as far as possible, and while considering the user experience, a master channel should: - maintain one conversation thread per chat, indicating its name, source channel and type; - support all, if not most, types of messages defined in the framework, and process and deliver messages between the user and slave channels; - support all, if not most, features of messages, including: targeted message reply, chat substitution in text (usually used in @ references), commands, etc. A master channel should be able to process incoming messages with such features, and send messages with such features to slave channels if applicable; - be able to invoke and process "additional features" offered by slave channels. Optionally, a master channel can also support / identify vendor-specific information from certain slave channels. An example of an ideal design of a master channel, inspired by Telegram Desktop Depending on your implementation, a master channel will probably need to maintain a list of chats and messages for presentation or other purposes; a rough sketch of such bookkeeping is shown after this section. Message delivery¶ Note that sometimes users may send messages outside of this EFB session, so slave channels might deliver a message whose author is marked as "self".
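As a purely illustrative sketch (not a complete or official channel), the fragment below shows one way a master channel could keep one conversation thread per chat, as recommended above. Only names documented elsewhere on this site (EFBChannel, EFBMsg, EFBStatus, channel_id, channel_emoji, send_message, chat.channel_uid, chat.chat_uid) are relied on; the top-level import locations and channel_name are assumptions, and _new_thread / _render_into_thread are hypothetical placeholders.
from typing import Optional
from ehforwarderbot import EFBChannel, EFBMsg, EFBStatus

class DummyMasterChannel(EFBChannel):
    channel_name = "Dummy Master"        # assumed attribute name
    channel_emoji = "📮"
    channel_id = "example.dummy_master"

    def __init__(self, instance_id: str = None):
        super().__init__(instance_id)
        self.threads = {}                # (channel_uid, chat_uid) -> thread object

    def send_message(self, msg: EFBMsg) -> EFBMsg:
        # Route each message coming from a slave channel into the thread of its chat.
        key = (msg.chat.channel_uid, msg.chat.chat_uid)
        thread = self.threads.setdefault(key, self._new_thread(msg.chat))
        self._render_into_thread(thread, msg)   # hypothetical UI helper
        return msg

    def send_status(self, status: EFBStatus):
        pass                             # handle chat updates, message removals, etc.

    def poll(self):
        pass                             # block here and serve the user interface

    def stop_polling(self):
        pass                             # unblock poll() so the framework can exit

    def get_message_by_id(self, msg_id: str) -> Optional[EFBMsg]:
        return None

    # get_chats / get_chat / get_chat_picture are omitted; the API reference notes
    # they are not required for master channels.

    def _new_thread(self, chat):
        return []                        # placeholder thread representation

    def _render_into_thread(self, thread, msg):
        thread.append(msg)               # placeholder rendering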
https://ehforwarderbot.readthedocs.io/en/latest/guide/master.html
2019-04-18T15:35:34
CC-MAIN-2019-18
1555578517682.16
[array(['../_images/master-channel-0.png', '../_images/master-channel-0.png'], dtype=object)]
ehforwarderbot.readthedocs.io
The Asana integration uses OAuth to connect to your Asana account. Click the Connect to Asana button: You will then be asked to allow Saber to connect to your Asana account: Click the blue Allow button. You'll now be able to set up your Asana integration: Task Name (required): this allows you to define the naming format of the Asana tasks created in your Asana account. Clicking Test Settings does not save the integration; you will still need to click Save once you are happy with the settings.
https://docs.bugmuncher.com/integrations/asana/
2019-04-18T15:19:02
CC-MAIN-2019-18
1555578517682.16
[]
docs.bugmuncher.com
Blueprints Displays all the blueprints on the tenant, according to the user's permissions and the blueprints' visibility levels. The following information is displayed: - Icon image file - Name - Visibility level - Creation time - Last update time - Creator user-name - Main blueprint file name (as the blueprint archive can contain multiple files) - Number of deployments derived from the blueprint Widget Settings Refresh time interval - The time interval in which the widget's data will be refreshed, in seconds. Default: 10 seconds Enable click to drill down - This option enables redirecting to the blueprint's drill-down page upon clicking on a specific blueprint. Default: True Display style - Can be either Catalog or table. The deployments status column is only available in the table display style. Default: table
https://docs.cloudify.co/4.5.5/working_with/console/widgets/blueprints/
2019-04-18T14:47:58
CC-MAIN-2019-18
1555578517682.16
[array(['../../../../images/ui/widgets/blueprints-list.png', 'blueprints-list'], dtype=object) ]
docs.cloudify.co
SQL Server Settings This page covers the following settings sources: Blocking SQL, Blocking SQL Source, Databases Source (for more information, see the Fragmentation Manager topic), Deadlocks Source, Maintenance Plan Source, Reporting Services Report, Reporting Services Report Source, SQL Server Agent Alerts Source, SQL Server Agent Job, SQL Server Agent Jobs Source, SQL Server Agent Log Source, SQL Server Instance, Top SQL, and Top SQL Source. Note: Top SQL Source: There is an And relationship between the Minimum Duration, Minimum CPU, Minimum Reads, and Minimum Writes collection settings. This means that to be collected as Top SQL, an event needs to satisfy each individual collection setting. For example, if you set the Minimum Duration to 10 seconds and the Minimum Reads to 25, an event needs to meet both a Minimum Duration of 10 seconds And a Minimum Reads of 25 to be captured in Top SQL. Minimum Duration can't be set below 100ms unless Minimum CPU, Minimum Reads, or Minimum Writes is greater than zero. This lower limit is enforced because setting these thresholds below 100ms for an extended period of time could dramatically increase the volume of data collected and stored by SentryOne, and have a negative impact on the monitored server. SentryOne's Quick Trace functionality is better suited to analyzing extremely short duration events.
https://docs.sentryone.com/help/sql-server-settings
2019-04-18T15:25:24
CC-MAIN-2019-18
1555578517682.16
[]
docs.sentryone.com
EFBChannel¶ - class ehforwarderbot.EFBChannel(instance_id: str = None)[source]¶ The abstract channel class. channel_emoji¶ Emoji icon of the channel. It is recommended to use a visually-length-one emoji that represents the channel best. channel_id¶ Unique identifier of the channel. The convention for IDs is specified in Packaging and Publish. This ID will be appended with its instance ID when available. __init__(instance_id: str = None)[source]¶ Initialize the channel. An inherited initializer must call the "super init" method at the beginning. get_chat(self, chat_uid: str, member_uid: Optional[str] = None) → EFBChat[source]¶ Get the chat object from a slave channel. Note: This is not required by Master Channels. get_chat_picture(chat: EFBChat) → IO[bytes][source]¶ Get the profile picture of a chat. A profile picture is also sometimes referred to as a profile photo, avatar, or "head image". Examples:
if chat.channel_uid != self.channel_uid:
    raise EFBChannelNotFound()
file = tempfile.NamedTemporaryFile(suffix=".png")
response = requests.post("", data={"uid": chat.chat_uid})
if response.status_code == 404:
    raise EFBChatNotFound()
file.write(response.content)
file.seek(0)
return file
Note: This is not required by Master Channels. get_chats() → List[EFBChat][source]¶ Return a list of available chats in the channel. Note: This is not required by Master Channels. get_message_by_id(msg_id: str) → Optional[EFBMsg][source]¶ Get a message entity by its ID. Applicable to both master channels and slave channels. Return None when the message is not found. Override this method and raise EFBOperationNotSupported if it is not feasible to perform this for your platform. poll()[source]¶ Method to poll for messages. This method is called when the framework is initialized. This method should be blocking. send_message(msg: EFBMsg) → EFBMsg[source]¶ Send a message to, or edit a sent message in, the channel. send_status(status: EFBStatus)[source]¶ Send a status to the channel. Note: This is not applicable to Slave Channels. stop_polling()[source]¶ When the EFB framework is asked to stop gracefully, this method is called on each channel object to stop all processes in the channel, save all status if necessary, and terminate polling. When the channel is ready to stop, the polling function must stop blocking. The EFB framework will quit completely when all polling threads end. Common operations¶ Sending messages and statuses¶ Sending messages and statuses to other channels is the most common operation of a channel. When the channel has gathered enough information from external sources, the information should be further processed and packed into the relevant objects, i.e. EFBMsg and EFBStatus. When the related information is packed into these objects, it can be sent to the coordinator for the next step. For now, both EFBMsg and EFBStatus have an attribute that indicates where the object should be delivered to (EFBMsg.deliver_to and EFBStatus.destination_channel). This is used by the coordinator when delivering the message. Messages can be delivered with coordinator.send_message(), and statuses can be delivered with coordinator.send_status(). When the object is passed onto the coordinator, it will be further processed by the middleware.
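To make the delivery flow above concrete, here is a minimal, hypothetical sketch of a slave channel packing an incoming text message and handing it to the coordinator. Attribute names beyond deliver_to (uid, type, text, chat, author), the coordinator.master reference, and the import location of MsgType follow common EFB usage but are assumptions here and may differ between framework versions.
from ehforwarderbot import EFBMsg, coordinator
from ehforwarderbot.constants import MsgType   # assumed location of the MsgType enum

def deliver_incoming_text(self, chat, platform_msg_id, text):
    # 'chat' is an EFBChat previously obtained via get_chat() / get_chats().
    msg = EFBMsg()
    msg.uid = platform_msg_id            # unique ID assigned by the IM platform
    msg.type = MsgType.Text
    msg.text = text
    msg.chat = chat
    msg.author = chat                    # e.g. the chat itself for a private conversation
    msg.deliver_to = coordinator.master  # master channel instance held by the coordinator
    coordinator.send_message(msg)        # middleware processing happens inside the coordinator
    return msg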
https://ehforwarderbot.readthedocs.io/en/latest/API/channel.html
2019-04-18T15:33:44
CC-MAIN-2019-18
1555578517682.16
[]
ehforwarderbot.readthedocs.io
Setting up the backend Every app that is built with ApiOmat Studio requires an app backend. The app backend is where data is stored and additional business logic can be written. For our example app, we will create an ApiOmat module to store our information. In our module, we will add the classes "Machine", "AssemblyLine" and "Summary". Creating a new backend When you log into ApiOmat, you have to create a new backend. If your account does not have a backend, you will be prompted to create a backend upon logging in. If you already have a backend, you can create a new backend by clicking on "My App-Backends" in the top left corner of the dashboard. In both cases, you have to give your new app-backend a name and optionally a description. Make sure the "ACTIVE" slider is on and click on the red plus button. Creating a new module In ApiOmat, we have to create modules that contain the meta-data to store our information. These modules can be reused in multiple applications and also give developers the opportunity to write business logic in Java. To create a new module, click on "New Module". Give the module a name. In this tutorial we will call ours MMWApp. Here you can write a description, upload an icon and publish it to make it accessible to other ApiOmat users. Defining the classes Now we have to define our meta-models, and we do so in the "Class Editor" tab. Once you're at the class editor, click on "New Class". In the first field, we have to decide which module we want to add our class to. Select MMWApp and then set the class name to Machine. Repeat the process for AssemblyLine and Summary. Adding attributes to a class Now that we have our classes, we have to give them attributes so that we can save information about the machines, assembly lines and the summaries. We'll start by navigating to the Machine class by clicking on Machine in the left-hand menu under MMWApp. Scroll down to the Attributes section and type machineId into the new attribute field. Next we have to define the type of data that this attribute will hold. Since the machineIds will contain text including the name of the machine, we will select STRING. Once we've given the attribute a name and selected the data type, click on the red plus button to add it. For AssemblyLine, add: name - with the data type string picture - with the data type image For Summary, add: machineID - with the data type string assemblyId - with the data type string amount - with the data type string image - with the data type image Important: ApiOmat Studio currently only supports the following data types: Long, Double, Strings, Date and Images. Collections are also limited in their use. References are also currently unavailable. Compiling our module Whenever we make changes to our module or the classes within a module, we have to recompile our modules to implement the changes. We do so by clicking on the notifications bell at the top of the screen and then on the compile button. ApiOmat will then recompile the module and our changes are implemented. Next Step That's it, we've set up our backend. If you haven't already installed ApiOmat Studio, follow the instructions here: Installing Apiomat Studio If you've already installed ApiOmat Studio, you can go straight to Building the Start Screen.
http://docs.apiomat.com/32/Setting-up-the-backend.html
2019-04-18T14:52:20
CC-MAIN-2019-18
1555578517682.16
[array(['images/download/attachments/27724764/1-opt-creating-the-backend.gif', 'images/download/attachments/27724764/1-opt-creating-the-backend.gif'], dtype=object) array(['images/download/attachments/27724764/2-opt-new-module.gif', 'images/download/attachments/27724764/2-opt-new-module.gif'], dtype=object) array(['images/download/attachments/27724764/4-opt-adding-attribute.gif', 'images/download/attachments/27724764/4-opt-adding-attribute.gif'], dtype=object) array(['images/download/attachments/27724764/5-opt-select-data-type.gif', 'images/download/attachments/27724764/5-opt-select-data-type.gif'], dtype=object) array(['images/download/attachments/27724764/6-opt-compile-modules.gif', 'images/download/attachments/27724764/6-opt-compile-modules.gif'], dtype=object) ]
docs.apiomat.com
Using the call detail record (CDR), which includes call details such as point of origin, end point, call direction, call duration, and more.
https://docs.8x8.com/VirtualOfficeAnalytics/Content/VOA/CallDetailRecord.htm
2019-04-18T14:57:55
CC-MAIN-2019-18
1555578517682.16
[]
docs.8x8.com
Document Type Article Abstract After a workshop on student outcomes for the first-year writing course, the 28 faculty participants discussed the implications of "Development" for critical thinking. This case study of one college's participatory exercise in improving writing found that although the RWU faculty lacked consensus on the definition, simply discussing the topic of "Development" may have had the unintended effect of fewer A grades in the following semester. Unfortunately, the percentage of A grades rose again in subsequent semesters, suggesting that without reinforcement, faculty returned to grade inflation. Recommended Citation Andrade, Glenna M. 2009. "Grading Changes after a Writing Faculty Workshop." Academic Exchange Quarterly 12 (1) Published in: Academic Exchange Quarterly, Volume 12, Issue 1, Spring 2009.
https://docs.rwu.edu/fcas_fp/18/
2019-04-18T14:25:02
CC-MAIN-2019-18
1555578517682.16
[]
docs.rwu.edu
Timeline¶ This document is a summary of the DebOps development over time. You can see most of the project's history in git logs; however, tracing it might be confusing due to the split and subsequent merge of the code back together. Here, we try to explain why that happened. Summary of the events¶ The project was initiated by Maciej Delmanowski in October 2013. In September 2014, after two project name changes, the code contained in one git repository was moved into multiple git repositories published in the debops organization on GitHub to allow publication of the roles in the Ansible Galaxy, as well as better usage of Travis CI to test the codebase. The decision to move the project codebase to the separate git repositories shaped the DebOps project in multiple ways. It enforced the code separation between different Ansible roles that required development of proper ways to make them interact with each other and pass the data around. New open source projects, ansigenome and rolespec, were created to aid the DebOps development and maintenance. Unfortunately, the growing codebase resulted in a quickly rising number of git repositories to maintain, which sapped the available resources from project development. There were also issues with packaging the DebOps code and documentation in Debian, as well as no practical way to provide a "stable release" due to the separate git repositories being independently tagged and developed. Because of that, in August 2017 the project maintainers decided to merge all of the git repositories back into one monorepo to make the DebOps development easier. The process was completed over a period of a few months. As a result, the development model also changed into a more distributed way with multiple forks of the main repository. At present, the DebOps codebase is being prepared for its first official stable release. 2013¶ May 2013¶ - Debian 7.0 (wheezy) becomes a Debian Stable release. It was the first Debian release supported by DebOps. September 2013¶ - Ansible 1.3 ("Top of the World") is released. This version introduced the role default variables, local facts and role dependencies, which became an integral part of DebOps later on. October 2013¶ - Initial commit in the ansible-aiua git repository which will eventually become DebOps. - Introduction of randomly generated passwords with the MySQL role. This feature will eventually evolve into the debops.secret role and will be used almost everywhere in DebOps. December 2013¶ - The ansible-aiua project is renamed to ginas. ginas is not a server. - Support for ownCloud deployment is introduced. The role is used as a test case for PHP5 support in the project, and will eventually become one of the end-user applications provided in DebOps. 2014¶ February 2014¶ - Project gains support for Vagrant virtual machines, used for demonstration purposes. - Travis CI tests are introduced to find any issues with pull requests before merging them. The project gets its own GitHub organization, and a new development model using forked repositories is introduced. - Introduction of Sphinx-based documentation. March 2014¶ - Support for GitLab CE deployment is introduced. The gitlab role will be used to test Ruby support and as an integration test for other DebOps roles, as well as to provide a git server for the IT infrastructures managed by DebOps. July 2014¶ - Introduction of Nick Janetakis as the first major contributor to the project, with the first draft of the Getting Started guide.
- Nick Janetakis creates the ansigenome project, which is meant to ease management of multiple Ansible roles. August 2014¶ - The ginas project is renamed to the DebOps project. The debops.org DNS domain is registered, and the project gets its own website, mailing list and GitHub organization. September 2014¶ - The last commit in the old DebOps repository. The development of this repository has been frozen since. It is now included in the DebOps monorepo as a separate ginas-historical branch. - Nick Janetakis creates the rolespec project, which provides a unified test environment for separate DebOps roles based on Travis CI. - First version of the DebOps install scripts written in Bash, located in the debops-tools repository. They will be used to download all other DebOps repositories with playbooks and roles. November 2014¶ - Maciej Delmanowski writes the ipaddr() Ansible filter plugin for usage with the debops.ifupdown role and others that require IP address manipulation. The plugin is later merged into Ansible Core. December 2014¶ - Hartmut Goebel rewrites the Bash DebOps scripts in Python. They will later be published on PyPI, which will become the main installation method. - debops-tools v0.1.0 is released. This repository contains various scripts that can be used to install or update the DebOps roles and playbooks git repositories, create project directories, and run the playbooks. 2015¶ February 2015¶ - debops-playbooks v0.1.0 is released. This repository holds the DebOps playbooks that tie all of the roles together, and was treated as the "main" repository of the project when it was split into multiple git repositories. March 2015¶ - Robert Chady introduces custom Ansible lookup plugins to the project, file_src, template_src and later task_src, which allow usage of custom files and templates inside roles without modifications, as well as injection of custom Ansible tasks in the roles. April 2015¶ - Debian 8.0 (jessie) becomes a Debian Stable release. June 2015¶ - Introduction of MariaDB server and client roles to the project. They were used to test and develop the split client/server role model with support for a database server on remote hosts, later adopted in other DebOps roles. September 2015¶ - After discussion in the community, the role dependency model in DebOps is redesigned. Most of the role dependencies will be moved from the role meta/main.yml configuration to the playbook level to allow easy use of various DebOps roles independently from each other. October 2015¶ - The debops-contrib GitHub organization is created to host third-party DebOps git repositories and serve as a staging point for including new Ansible role repositories in DebOps. 2016¶ January 2016¶ - Ansible 2.0 ("Over the Hills and Far Away") is released. March 2016¶ - The DebOps mailing list is moved to a self-hosted Mailman installation based on DebOps, to ensure that the project is "eating its own dog food". April 2016¶ - Daniel Sender creates the first iteration of the debops Debian package. Unfortunately, problems with the debops-doc package prevent full inclusion of the project in Debian. July 2016¶ - Robin Schneider creates a DebOps entry in the Core Infrastructure Initiative Best Practices program. 2017¶ June 2017¶ - Debian 9.0 (stretch) becomes a Debian Stable release. August 2017¶ - Maciej Delmanowski proposes a merge of all of the project repositories back together into one DebOps monorepo. The plan is to resolve all pending pull requests in various repositories before merging starts.
September 2017¶ - debops-tools v0.5.0 was the last tagged release of the DebOps scripts before the repository was merged into the new DebOps monorepo. October 2017¶ - The last commit in the debops-playbooks git repository. Later on the repository will be merged into the new DebOps monorepo. - All of the pending pull requests in DebOps roles are resolved and the code from the separate git repositories is merged into a single monorepo, which becomes the main development repository. - debops v0.6.0 is released, along with updated scripts that support installation of the monorepo by the debops-update command. The release is fully compatible with older DebOps roles and playbooks. From this point on the old and new codebases start to diverge. - ypid roles from the 'debops-contrib' organization are merged into the DebOps monorepo without further changes; they will be integrated with the main playbook later on. November 2017¶ - Sphinx-based documentation is reinitialized in the monorepo. The previous iteration, based on a central git repository and git submodules, is deemed unsuitable; however, the current project documentation published on ReadTheDocs is kept in place until the role documentation is fully migrated. - A new Travis CI test suite is introduced that focuses on syntax, testing Python scripts, YAML documents, project documentation and git repository integrity. DebOps roles are not tested directly on Travis anymore. - Support for Docker containers is introduced in the monorepo, along with an official 'debops/debops' Docker image which is automatically rebuilt and published on any changes in the repository. December 2017¶ - A new test suite based on GitLab CI is introduced which allows testing of the DebOps roles using a Vagrant, LXC and KVM/libvirt stack. 2018¶ January 2018¶ - DebOps role documentation is moved to the 'docs/' directory and the project documentation published on ReadTheDocs is switched to the DebOps monorepo version. May 2018¶ - End of Debian Wheezy LTS support. 2020¶ April 2020¶ - End of Debian Jessie LTS support.
https://docs.debops.org/en/v0.8.1/introduction/timeline.html
2019-04-18T14:49:46
CC-MAIN-2019-18
1555578517682.16
[]
docs.debops.org
How do I set up an email signature? You can set a signature on each mail account or alias. This signature will be added at the bottom of each email sent to a recipient. 1) Click on "Mail accounts" in the sidebar: 2) Find the mail account or alias and click the "Edit" button: 3) Edit the space beneath the name field to set up your signature. Use our email editor to customize your signature. Be careful about pasting in signatures from email clients, because embedded font sizes can sometimes cause your signature to have a different font from your main email. You can insert images into your signature, but we don't recommend it for cold emailing because it can negatively affect your spam score. When you send a test email in our message editors, this signature will automatically be applied, so this is a good way to test that it looks correct:
https://docs.mailshake.com/article/87-how-do-i-set-up-an-email-signature
2019-04-18T14:54:46
CC-MAIN-2019-18
1555578517682.16
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56f5e15f9033601eb6736648/images/583d910ec6979106d3737d44/file-i16DJaMH43.png', None], dtype=object) ]
docs.mailshake.com
High Availability Displays the Manager's High-Availability status. If a cluster architecture is configured on the manager, this widget will show the cluster-connected nodes. There are no click-through actions available from this widget, as all cluster management actions should be performed from the Cloudify CLI / REST API. Widget Settings Refresh time interval - The time interval in which the widget's data will be refreshed, in seconds. Default: 30 seconds
https://docs.cloudify.co/4.5.5/working_with/console/widgets/highavailability/
2019-04-18T14:51:17
CC-MAIN-2019-18
1555578517682.16
[array(['../../../../images/ui/widgets/list-nodes-in-cluster-2.png', 'list-nodes-in-cluster-2'], dtype=object) ]
docs.cloudify.co
If you are a BigCommerce customer there are many benefits of integrating with PayWhirl, including: recurring payments & billing, stored customer credit cards, layaway plans, pre-orders, custom invoicing, subscriptions, membership content and many other use cases. BigCommerce doesn't support subscriptions or stored credit cards in their native cart or checkout process, but you can use PayWhirl to add this functionality to your online store. When a customer completes a purchase through a PayWhirl widget, their address and credit card info will be stored in the gateway you connect so both the customer and/or you, the admin, can use it again if needed. Customers can also return and log in to their "customer portal" to manage their information, cards on file, etc. (this can be disabled if you want). The customer portal is typically embedded directly into the existing BigCommerce "My Account" section of your website so customers are unaware there are two different systems. Or you can use the hosted version of your customer portal, if you don't want to integrate the two systems. Either way, customers can also use the same login credentials on both PayWhirl and BigCommerce. Once customers log in, they have a place to purchase subscriptions and they can purchase additional products in PayWhirl with their saved info, all without ever leaving your website. If you already have products in BigCommerce you can continue using the existing cart checkout. Note: Our cart system is separate from the one built into BigCommerce. We cannot import items or complete a checkout that begins in the BigCommerce cart... In a nutshell the process works like this: - Connect to BigCommerce / install app - Create payment plan(s) - Create payment widget(s) - Create BigCommerce page(s) for each widget (or product pages) - Embed your widget(s) into BigCommerce page(s) by copying / pasting embed codes into your pages / product pages. - Connect and Activate Live Payment Gateway. Note: By default every widget is connected to the Test Gateway and doesn't store or charge credit cards. To begin accepting live payments make sure to connect a gateway to each widget that is embedded in your site. Also make sure to update the customer portal settings page. - Install the app - Once you have the PayWhirl App installed you can create payment plans for use with BigCommerce. You will see new options on the plan settings page to help you control how orders flow into BigCommerce after successful payments. Orders are generated based on your plan settings. Note: You will need to create a plan or upsell for every product/service you want to sell through PayWhirl. - Set up payment widget(s) and/or payment forms - After you have connected PayWhirl to your BigCommerce store and created your payment plan(s) you will need to create payment widgets or forms so your customers can check out securely on your site. When you set up your payment widget(s) you will choose which plans to display in each widget. After you save your widget(s) you will receive an embed code (a few lines of code) that you can then copy / paste into BigCommerce. Note: We offer two different types of embed codes per widget: embedded widgets (customers check out in the page) and "buy button" widgets (which pop up checkout over the page for customers). - Create page(s) / product page(s) in BigCommerce for your widgets - Once you have your embed codes for your widgets or buy buttons you can proceed with pasting the code into the HTML of any page or product page within BigCommerce.
- Use the HTML input method in the WYSIWYG editor by clicking the icon that says HTML in the graphical page editor. - Paste the Widget Code into the HTML Source Editor 4. When you are ready to go live, edit the widget and select the live gateway. Each widget type is a bit different, but towards the bottom you can click on the advanced settings. Select the live gateway from the drop down. If you haven't connected a gateway, check out one of these guides. Stripe is available on all plans, while Braintree and Authorize.net are available on our monthly paid plans. Each business's setup is unique, but typically BigCommerce customers will set up ONE PAGE in BigCommerce and ONE WIDGET for EACH PRODUCT they want to offer payment plans on... Then, they will set up however many PLANS they want to offer for each product and build a WIDGET that has all of the PLANS for the specific product. Finally, they will embed the WIDGET into the PAGE they created originally in BigCommerce. Related Articles: How to integrate BigCommerce with PayWhirl How to control orders placed in BigCommerce How to setup membership discounts with BigCommerce Groups Please let us know if you have any questions. Best, Team PayWhirl
https://docs.paywhirl.com/PayWhirl/apps-and-integrations/bigcommerce/bigcommerce-setup-checklist
2019-04-18T14:38:52
CC-MAIN-2019-18
1555578517682.16
[array(['https://uploads.intercomcdn.com/i/o/12965493/2779709761ee78a727f9aa14/Html+bigcom+edit.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12965526/af5415e6e8e4b117d314ff37/bigcomm+source.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/24311434/ff76fc6ae9d5f11775503087/gatewayselectors.png', None], dtype=object) ]
docs.paywhirl.com
Bill Everyone is a non-automatic form of billing that allows you to bill all your customers at once. For example: if you owned a winery and your wine shipments were unpredictable, your customers could still subscribe to a plan and essentially stay on a list. A common example of this would be to create a $0/year plan, with a possible setup fee if you'd like to charge something immediately. Then, whenever you are ready to bill everyone on the list, you would use the Bill Everyone feature to do so. You can also use this feature to take pre-orders on PayWhirl. You would create a plan like the one described above for $0/year to collect a credit card and subscribe the customer to a "pre-order plan" or list. Then, when you are ready to ship the products, you can run the charge to everyone on the list(s) using the Bill Everyone feature. To bill your customers manually using plans as lists, go to the dashboard and click the Bill Everyone icon. Next, you'll choose the group of customers you would like to charge by selecting a plan to create your billing list. Finally, you'll enter the charge information and run a charge to everyone on the list/plan at once.
https://docs.paywhirl.com/PayWhirl/classic-system-support-v1/getting-started-and-setup/v1-bill-everyone-group-billing-pre-orders-more
2019-04-18T15:15:49
CC-MAIN-2019-18
1555578517682.16
[array(['https://uploads.intercomcdn.com/i/o/19830009/9dfa10a334b29160137c6127/note.png', None], dtype=object) ]
docs.paywhirl.com
Title The Jackprot Simulation Couples Mutation Rate with Natural Selection to Illustrate How Protein Evolution Is Not Random Document Type Article Abstract Protein evolution is not a random process. Views which attribute randomness to molecular change, deleterious nature to single-gene mutations, insufficient geological time, or population size for molecular improvements to occur, or invoke “design creationism” to account for complexity in molecular structures and biological processes, are unfounded. Scientific evidence suggests that natural selection tinkers with molecular improvements by retaining adaptive peptide sequence. We used slot-machine probabilities and ion channels to show biological directionality on molecular change. Because ion channels reside in the lipid bilayer of cell membranes, their residue location must be in balance with the membrane’s hydrophobic/philic nature; a selective “pore” for ion passage is located within the hydrophobic region. We contrasted the random generation of DNA sequence for KcsA, a bacterial two-transmembrane-domain (2TM) potassium channel, from Streptomyces lividans, with an under-selection scenario, the “jackprot,” which predicted much faster evolution than by chance. We wrote a computer program in JAVA APPLET version 1.0 and designed an online interface, The Jackprot Simulation, to model a numerical interaction between mutation rate and natural selection during a scenario of polypeptide evolution. Winning the “jackprot,” or highest-fitness complete-peptide sequence, required cumulative smaller “wins” (rewarded by selection) at the first, second, and third positions in each of the 161 KcsA codons (“jackdons” that led to “jackacids” that led to the “jackprot”). The “jackprot” is a didactic tool to demonstrate how mutation rate coupled with natural selection suffices to explain the evolution of specialized proteins, such as the complex six-transmembrane (6TM) domain potassium, sodium, or calcium channels. Ancestral DNA sequences coding for 2TM-like proteins underwent nucleotide “edition” and gene duplications to generate the 6TMs. Ion channels are essential to the physiology of neurons, ganglia, and brains, and were crucial to the evolutionary advent of consciousness. The Jackprot Simulation illustrates in a computer model that evolution is not and cannot be a random process as conceived by design creationists. Recommended Citation Paz-y-Miño-C, Guillermo and Avelina Espinosa, and Chunyan Y. Bai. 2011. "The Jackprot Simulation Couples Mutation Rate with Natural Selection to Illustrate How Protein Evolution Is Not Random." Evolution: Education and Outreach 4, (3): 502-514. Published in: Evolution: Education and Outreach, Volume 4, Issue 3
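The cumulative-selection idea summarized in this abstract can be illustrated with a small toy script (this is not the authors' Java applet, and the target string below is an arbitrary placeholder rather than real KcsA sequence): regenerating a whole sequence at random almost never hits the target, whereas mutating only the positions that selection has not yet "retained" converges in a handful of generations.
import random

ALPHABET = "ACGT"
TARGET = "ATGGCACTTTGTTTT"   # placeholder 15-base target, not actual KcsA sequence

def cumulative_selection(target, rng):
    """Mutate only mismatching positions; matching positions are retained by 'selection'."""
    current = [rng.choice(ALPHABET) for _ in target]
    generations = 0
    while "".join(current) != target:
        generations += 1
        for i, base in enumerate(target):
            if current[i] != base:
                current[i] = rng.choice(ALPHABET)
    return generations

rng = random.Random(42)
print("Generations under cumulative selection:", cumulative_selection(TARGET, rng))
# A blind random search over the same 15 bases would need about 4**15 (~10**9) draws on average.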
https://docs.rwu.edu/fcas_fp/179/
2019-04-18T14:33:20
CC-MAIN-2019-18
1555578517682.16
[]
docs.rwu.edu
CodeRed CMS 0.9.0 release notes¶ New Features¶ NEW Store Locator feature powered by Google Maps. See Store Locator. NEW import/export functionality. Import or export existing pages as JSON. Import new pages from CSV files. See Import/Export. Replaced Google Analytics with Google Tag Manager. Added additional blocks to the WebPage HTML template to ease template extending.
https://docs.coderedcorp.com/cms/stable/releases/v0.9.0.html
2019-04-18T14:35:03
CC-MAIN-2019-18
1555578517682.16
[]
docs.coderedcorp.com
ReferralCandy is a popular app that allows you to track and reward customers who refer their friends to your business. To create your own affiliate rewards program using ReferralCandy and PayWhirl, please follow the tutorial below. - Create a ReferralCandy account and locate your account "App ID" and "Secret Key" on your admin settings page. 2) Next, install the integration on PayWhirl. Click the green "Install App" button under ReferralCandy on the integrations page: In the ReferralCandy integration settings paste in your App ID & Secret Key: 3) Set up your ReferralCandy rewards and account settings. In this example we're going to set up ReferralCandy with PayWhirl so customers can earn 50% off their next payment by referring a friend. Then we will set up a "Friend Offer" in ReferralCandy for 20% off to help entice new customers, or "friends" of existing customers. To set up your Referral Reward, click "Referral Reward" in the main menu of ReferralCandy. Next, click "Edit" in the Referral Rewards block: For this example we'll choose "Coupon" as the type of reward and describe our offer for "advocates" in the referral reward settings. Next, we will set up the referral reward conditions in ReferralCandy. These are the settings that control the "rules" of your referral reward for advocates. In this example, we will be rewarding based on the following conditions: rewards will be delivered in USD, the referred friend must spend at least $5.00, the reward email will be sent 6 hours after the friend's purchase, advocates will be rewarded only on the friend's first purchase, and advocates can receive one reward only for each friend they refer. NOTE: You can configure these settings however you'd like. This is just one example of how you can reward customers for referrals. Now we need to set up a promo code for our Referral Reward in PayWhirl. Click "Promo Codes" in the main menu of PayWhirl: Next, click the green "New Promo" button in the top right corner of PayWhirl to create your new promo code for the referral reward. We're going to make our advocate reward a 50% off promo code ("50OFF") that can be applied to new or existing subscriptions on PayWhirl. Later on in this tutorial we will need to create our "friend offer" promo code, so let's do that now while we're here on PayWhirl's manage promo codes page. The friend offer will be given to "friends" of advocates as an incentive to sign up. In our example, we will make the friend offer a 20% off promo code ("WELCOME22") to help entice people to purchase. NOTE: You can change these settings to meet your own business needs. For example, you don't have to have a friend offer (they are technically optional). Now that we have our two different promo codes created (referral reward & friend offer), we can finish our setup in ReferralCandy. Let's continue by adding our referral reward promo code to our ReferralCandy account settings under Referral Reward. On the Referral Reward page click "Edit" to add your promo code for your advocates: Paste in your "50OFF" promo code from PayWhirl as your referral reward for advocates. In this example we only created one referral reward promo code, but they suggest at least 100 if they are one-time use codes. Next, we will set up our Friend Offer settings in ReferralCandy. Start by clicking the "Friend Offer" menu item in ReferralCandy: Then click "Edit" in the "Friend Offer" block and configure your settings as follows: NOTE: You should change these settings to meet your business needs... This is just an example!
Now we need to add our "WELCOME22" friend offer promo code to the Friend Offer settings. Click "Edit" next to "Manage your Friend Offer coupon code." Please customize your own settings as needed. Finally, we just need to activate our campaign on the ReferralCandy status page: NOTE FOR SHOPIFY USERS WHO WANT TO USE REFERRALCANDY WITH PAYWHIRL: You will need to contact ReferralCandy after you have installed the ReferralCandy app. This step will allow you to manually set coupon codes in ReferralCandy so you can follow the integration steps above. SHOPIFY REFERRALCANDY CUSTOMERS... ReferralCandy has added an option for retailers to select the "PayWhirl" integration as part of the setup process (see the screenshot below). This is the first step that retailers have to complete in order to configure the needed rewards properly. This option can be found in the ReferralCandy dashboard (you can log into the ReferralCandy dashboard to see this in action). That's it!... Once you activate your ReferralCandy campaign, your referral program will be live and ready to track customer referrals. As customers use their referral links you will begin to see events show up in ReferralCandy's activity monitors. ReferralCandy will email out the rewards (in the form of promo codes in this example) automatically based on your settings, and customers can then apply their reward codes or friend offer codes to new / existing subscriptions on PayWhirl. NOTE: You might have to enable the ability for customers to apply promo code rewards to EXISTING invoices or subscriptions in your PayWhirl "My Account" settings, under advanced settings. If you have any questions, please let us know. Team PayWhirl
https://docs.paywhirl.com/PayWhirl/apps-and-integrations/other-apps-and-integrations/how-to-integrate-referralcandy-with-paywhirl
2019-04-18T14:40:57
CC-MAIN-2019-18
1555578517682.16
[array(['https://uploads.intercomcdn.com/i/o/12654983/5947f120ad173119eaa39bb3/rc-admin-settings.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655013/9dbc2752aceab194620f953c/integrations-menu.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655031/2fca8d3beec5b00d3300826c/referralcandy-install.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655052/a73537864b1c823dec82bb15/pw-integrations-settings.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655285/1667613da11923d319ca4389/rc-referral-reward-menu.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655307/44146f59cd1ec1df86f4f20e/rc-referral-reward-edit.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655319/c3f4b5b95e295568f486c229/rc-referral-reward-settings.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655341/9885a0330cf67b9f1ae86569/reward-conditions.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655366/bcea630bc180b2a19e3348f6/promo-codes-menu.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655384/170e4ec72b718e88fef8a330/referral-promo-code.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655427/27afa86fc387a15d2f8fe4c4/friend-promo-code.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655454/320d34b398b19720ca1e53b3/edit-advocate-reward.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655464/a9bb07b1b9acd321330c33c7/coop-code-add-1.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655481/802c693fd84969d30abfdc5b/edit-friend-offer0.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655505/d984e10aa317eba76b684e16/edit-friend-offer.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655521/e4b4e82c170a17ea2f081ff6/edit-friend-offer2-1.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655591/726c6a74d5260bfbb0c5c1c4/edit-friend-offer2.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655598/92480c13423f986534a9f935/edit-friend-offer3.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12655615/bdaa0d68cc687a1a3d9886f5/camp-status.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/33583228/e567377505ec05397168f9d8/Screen+Shot+2017-09-11+at+4.01.12+PM.png', None], dtype=object) ]
docs.paywhirl.com
module RSS::RSS09 RSS 0.9 support¶ RSS has three different versions. This module contains support for version 0.9.1. Producing RSS 0.9 Constants - ELEMENTS - NSPOOL Public Class Methods append_features(klass) Calls superclass method # File lib/rss/0.9.rb, line 43
def self.append_features(klass)
  super
  klass.install_must_call_validator('', "")
end
https://docs.ruby-lang.org/en/2.3.0/RSS/RSS09.html
2019-04-18T14:21:59
CC-MAIN-2019-18
1555578517682.16
[]
docs.ruby-lang.org
slideWindowTunePlot - Visualize parameter tuning for sliding window approach Description¶ Visualize results from slideWindowTune Usage¶ slideWindowTunePlot(tuneList, plotFiltered = TRUE, percentage = FALSE, jitter.x = FALSE, jitter.x.amt = 0.1, jitter.y = FALSE, jitter.y.amt = 0.1, pchs = 1, ltys = 2, cols = 1, plotLegend = TRUE, legendPos = "topright", legendHoriz = FALSE, legendCex = 1, title = NULL) Arguments¶ - tuneList - a list of logical matrices returned by slideWindowTune. - plotFiltered - whether to plot the number of filtered sequences (as opposed to the number of remaining sequences). Default is TRUE. - percentage - whether to plot on the y-axis the percentage of filtered sequences (as opposed to the absolute number). Default is FALSE. - jitter.x - whether to jitter x-axis values. Default is FALSE. - jitter.x.amt - amount of jittering to be applied on x-axis values if jitter.x=TRUE. Default is 0.1. - jitter.y - whether to jitter y-axis values. Default is FALSE. - jitter.y.amt - amount of jittering to be applied on y-axis values if jitter.y=TRUE. Default is 0.1. - pchs - point types to pass on to plot. - ltys - line types to pass on to plot. - cols - colors to pass on to plot. - plotLegend - whether to plot legend. Default is TRUE. - legendPos - position of legend to pass on to legend. Can be either a numeric vector specifying x-y coordinates, or one of "topright", "center", etc. Default is "topright". - legendHoriz - whether to make legend horizontal. Default is FALSE. - legendCex - numeric values by which legend should be magnified relative to 1. - title - plot main title. Default is NULL (no title) Details¶ For each windowSize, the numbers of sequences filtered or remaining after applying the sliding window approach are plotted on the y-axis against thresholds on the number of mutations in a window on the x-axis. When plotting, a user-defined amount of jittering can be applied to the values plotted on either axis or both axes via adjusting jitter.x, jitter.y, jitter.x.amt and jitter.y.amt. This may help with visually distinguishing lines for different window sizes in case they are very close or identical to each other. If plotting percentages ( percentage=TRUE) and using jittering on the y-axis values ( jitter.y=TRUE), it is strongly recommended that jitter.y.amt be set very small (e.g. 0.01). Combinations of mutThresh and windowSize where mutThresh is greater than windowSize are NA and will not be plotted.
Examples¶ # Use an entry in the example data for input and germline sequence data(ExampleDb, package="alakazam") # Try out thresholds of 2-4 mutations in window sizes of 3-5 nucleotides # on a subset of ExampleDb tuneList <- slideWindowTune(db = ExampleDb[1:10, ], mutThreshRange = 2:4, windowSizeRange = 3:5, verbose = FALSE) # Visualize # Plot numbers of sequences filtered without jittering y-axis values slideWindowTunePlot(tuneList, pchs=1:3, ltys=1:3, cols=1:3, plotFiltered=TRUE, jitter.y=FALSE) # Notice that some of the lines overlap # Jittering could help slideWindowTunePlot(tuneList, pchs=1:3, ltys=1:3, cols=1:3, plotFiltered=TRUE, jitter.y=TRUE) # Plot numbers of sequences remaining instead of filtered slideWindowTunePlot(tuneList, pchs=1:3, ltys=1:3, cols=1:3, plotFiltered=FALSE, jitter.y=TRUE, legendPos="bottomright") # Plot percentages of sequences filtered with a tiny amount of jittering slideWindowTunePlot(tuneList, pchs=1:3, ltys=1:3, cols=1:3, plotFiltered=TRUE, percentage=TRUE, jitter.y=TRUE, jitter.y.amt=0.01) See also¶ See slideWindowTune for how to get tuneList. See jitter for use of amount of jittering.
https://shazam.readthedocs.io/en/version-0.1.11_a/topics/slideWindowTunePlot/
2019-04-18T15:24:06
CC-MAIN-2019-18
1555578517682.16
[array(['../slideWindowTunePlot-2.png', '2'], dtype=object) array(['../slideWindowTunePlot-4.png', '4'], dtype=object) array(['../slideWindowTunePlot-6.png', '6'], dtype=object) array(['../slideWindowTunePlot-8.png', '8'], dtype=object)]
shazam.readthedocs.io
Description The job of a "name" in the context of ISO 19103 is to associate that name with an Object. Examples given are objects, which form namespaces for their attributes, and schemas, which form namespaces for their components. A straightforward and natural use of the namespace structure defined in 19103 is the translation of given names into specific storage formats. XML has different naming rules than shapefiles, and both are different than NetCDF. This common framework can easily be harnessed to impose constraints specific to a particular application without requiring that a separate implementation of namespaces be provided for each format. Records and Schemas are similar to a struct in C/C++, a table in SQL, a RECORD in Pascal, or an attribute-only class in Java if it were stripped of all notions of inheritance. They are organized into named collections called Schemas. Both records and schemas behave as dictionaries for their members and are similar to "packages" in Java.
http://docs.geotools.org/latest/javadocs/org/opengis/util/package-summary.html
2019-04-18T14:59:43
CC-MAIN-2019-18
1555578517682.16
[]
docs.geotools.org
Managing Images Using the CLI The cloud operator assigns roles that grant users the ability to upload and manage images. You can upload images through the Metacloud client or the Image service API. The Image service enables users to discover, register, and retrieve virtual machine images. It accepts API requests for disk or server images, and metadata definitions from end users or Metacloud Compute components. It also supports the storage of disk or server images on various repository types, including Object Storage. You can use the openstack client for image management; it provides mechanisms to list and delete images, set and delete image metadata, and create images of a running instance (snapshot and backup types). Note: After you upload an image, you cannot change it. Use the grep command to filter a list for a specific keyword, for example:
$ openstack image list | grep 'cirros'
| 95d786e3-0a6c-4db1-bc3c-1a184d585ff1 | cirros-0.3.4-x86_64 | active |
| d51b539b-a8a8-4274-808f-3dc96b6199a1 | cirros-0.3.3-x86_64 | active |
You can use optional arguments with the create and set commands to modify the image properties. The following example demonstrates uploading a CentOS 6.3 image in qcow2 format for public access:
$ openstack image create centos63-image \
  --disk-format qcow2 \
  --container-format bare \
  --public --file ./centos63.qcow2
Setting Image Properties To update an existing image with properties that describe the disk bus, the CD-ROM bus, and the VIF model:
$ openstack image set <IMAGE> \
  --property hw_disk_bus=scsi \
  --property hw_cdrom_bus=ide \
  --property hw_vif_model=e1000
To set the operating system ID or a short-id in image properties:
$ openstack image set <IMAGE> --property short-id=fedora23
To set id to a URL:
$ openstack image set <IMAGE_ID> \
  --property id=
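For scripted workflows, the same listing can be done in Python with the openstacksdk library instead of the CLI. This is an illustrative sketch only; the cloud name "mycloud" is a placeholder referring to an entry in your clouds.yaml.
import openstack

# Connect using credentials defined in clouds.yaml (placeholder cloud name).
conn = openstack.connect(cloud="mycloud")

# Equivalent of `openstack image list | grep 'cirros'`.
for image in conn.image.images():
    if "cirros" in (image.name or ""):
        print(image.id, image.name, image.status)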
http://docs.metacloud.com/4.6/user-guide/cli-managing-images/
2019-04-18T14:18:41
CC-MAIN-2019-18
1555578517682.16
[]
docs.metacloud.com
Choosing a Flow Pattern When you create any Appcues flow, you will be prompted to select one of four patterns: modals, slideouts, hotspots, or tooltips. Use modals to welcome and motivate users, announce new features, or give your users a "choose your own adventure" onboarding experience. Use slideouts to announce new features, collect product feedback, or drive users to specific actions. In the example above, the Projectcu.es team has targeted a slideout to accounts with just one user to encourage them to add team members. Use hotspots to point out smaller UI changes and provide your users with on-demand guidance or tips. Above is a beacon-less tooltip with a backdrop turned on. Use tooltips to explain sequences and walkthroughs, or link them with a welcome modal to create an onboarding flow.
https://docs.appcues.com/article/112-choosing-a-pattern
2019-04-18T14:45:03
CC-MAIN-2019-18
1555578517682.16
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/588f8993dd8c8e73b3e922b5/file-kcJwuC6Uk7.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/588f8a172c7d3a7846306ef4/file-5jPPmGvDuW.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/588f94c12c7d3a7846306f7a/file-dfXflVfMNG.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/588f96f92c7d3a7846306fa4/file-gHc2HMXKln.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/588f9baddd8c8e73b3e923a6/file-yR9MsEIoJo.gif', None], dtype=object) ]
docs.appcues.com
Define one or more pricing plans for a product For each application, vendors can define one or more Plans (e.g. silver, gold, platinum, etc.). In short, plans are the product versions that will be available for your customers to choose from. In the following picture, you can see an example of the two product versions created for a product: one is the monthly plan and one is the annual plan. Edit a product plan To add plans you need to access your personal Control Panel. You will be able to access the "Catalog" from the menu on the left. Then you need to select the product you want to edit, click on it, select "Edit" and go to the "Plans" section. Then, click on the "Add New Plan" button. If you have already created a plan and you want to edit it, you can do so by selecting the specific Plan and clicking on the "Edit" button. A pop-up will show up, similar to the one in the following picture: Descriptions The General tab contains the following fields: - Name: this is the name of the plan and will be shown to customers; keep it short and easy to understand. For example, good plans could be gold/silver/bronze, basic/pro/enterprise, monthly/yearly - Tag: a colored ribbon that is useful to attract customer attention (e.g. 50% off!) - Description: in this field you need to describe the peculiarities of this specific plan - Subheader: a text that is shown on the order summary box on the marketplace - Weight: alters the ordering of plans when they are listed (lower weight will float to the top of lists, while heavier products will sink) - Published (yes/no): when active, this plan is available for purchasing on the marketplace - External ID: your identifier for this particular plan (useful to recognise it when consuming the API) This is an example configuration of a plan in the General tab: The next tab is Features: here you can define a list of catchy descriptions for this particular plan that can help the customer to choose between different plans of the product: This is how the plans show up on the marketplace setup order page with the previous example configuration: Pricing model The Pricing tab contains the main settings to adjust the billing of a plan: - Auto-renewal (yes/no): when activated, the subscription of the user that chose this plan will automatically renew at the expiry date. The customer should request an unsubscribe before the renewal takes place in order to avoid it The Pricing model (dropdown) enables you to select from a variety of options: - Renewable every N months: you can adjust the billing frequency (how often an invoice is emitted) and the minimum order duration (how long a customer is committed to pay). The recurring price is charged at the billing frequency, while the one-off price is charged only at the first purchase (for further information see this section) - Short duration not renewable: useful for services that are meant to be used on demand, e.g. a webinar session. - Everlasting: a subscription for this plan doesn't have an expiration date and can be terminated only by customer choice (e.g. a one-shot charge for buying a custom CMS theme) - Contact form only: pricing for this plan is undefined and the customer cannot place an order for it, but can ask you for more information - Free: this plan is for products that are free to use (e.g. open-source) The Configurations tab contains additional settings like: - Trial mode: whether you want to provide a trial period for this plan to your prospects.
For Managed products you are going to be charged for the required cloud resources needed to run the application - Credit card (not) required: you can choose whether or not to ask the customer to provide a valid credit card (without charging anything) in order to request a trial - Trial length: when trial is enabled, how many days the trial can last. At any time the customer can request to upgrade to a paid plan. The upgrade to paid can happen automatically at the end of the trial period if Auto-Renewal is enabled (the customer can unsubscribe without any costs before the end of the trial) - Coupon configuration: whether using coupons is globally available for this plan or not. An option to require a coupon in order to buy a plan is also available - Upgrades available: whether a subscription of this plan can be upgraded to another one (e.g. when upgrades are not feasible or would not be meaningful) Extra-Resources Extra-resources are goods or services which can be sold together with the product. Examples are a 10-day pack of technical support or some hardware components. In this section it's possible to configure the price ranges for each Extra-Resource. For further information look at the Extra-Resources section. Advanced Settings - Configuration Parameters: enable or disable configuration parameters that should have already been defined at product level. More details are in the Configuration Parameters section. - Integration Metadata: a key/value list where you can put custom data, especially useful when developing integrations for syndicated applications.
https://docs.cloudesire.com/docs/onboarding-plans.html
2019-04-18T15:17:01
CC-MAIN-2019-18
1555578517682.16
[array(['/docs/assets/catalog/plan-list.png', 'Vendors Control Panel: Plans list'], dtype=object) array(['/docs/assets/catalog/plan-list-action.png', 'Vendors Control Panel: Plans list - actions'], dtype=object) array(['/docs/assets/catalog/plan-edit-descriptions.png', 'plan edit descriptions'], dtype=object) array(['/docs/assets/catalog/plan-edit-descriptions-features.png', 'plan edit features'], dtype=object) array(['/docs/assets/catalog/marketplace-product-detail.png', 'marketplace plans list'], dtype=object) ]
docs.cloudesire.com
Statistics Page The Statistics page displays tools that allow you to monitor your deployments by visualizing collected system measurements. The page contains Resource and Time Filter widgets and four Metric Graphs widgets with predefined metrics. Resource Filter The Resource Filter widget allows you to select a specific node instance for monitoring. You can filter by blueprints, deployments and nodes to limit the list of node instances. You can find more about the Resource Filter widget here. Time Filter The Time Filter widget allows you to define the time range for all the graphs displayed on the page. For more about what you can achieve with the Time Filter widget, see here. Metric Graphs There are 4 Deployment Metric Graph widgets on the page. They display how the values of the following metrics change over time: - Memory Free - CPU Total User - CPU Total System - Load Average By changing a widget’s configuration you can visualize other metrics. To learn how to do this, see here.
https://docs.cloudify.co/4.5.5/working_with/console/statistics-page/
2019-04-18T14:56:25
CC-MAIN-2019-18
1555578517682.16
[array(['../../../images/ui/statisticsPage/statistics-page.png', 'Statistics Page'], dtype=object) ]
docs.cloudify.co
Troubleshooting - The product image is cropped or blurry - Two sets of images upon activation - WooThumbs not working in the Avada theme - The product page layout is now broken - Clear the image cache - The image zoom is not working - WooThumbs not showing in The Gem theme - The zoomed image has shifted down - My vertical thumbnails are cut off at the bottom - There are two sets of images showing in the royal theme - Image not changing in catalog mode - Vertical slider images are cut off or bleeding into the next slide - Featured image is displayed twice in the Kallyas theme
https://docs.iconicwp.com/category/114-troubleshooting
2019-04-18T15:20:42
CC-MAIN-2019-18
1555578517682.16
[]
docs.iconicwp.com
If you want to use PayWhirl on your Shopify store, you have two options. You can either paste your PayWhirl embed code into a page in Shopify, or use the PayWhirl Shopify App. Using the PayWhirl Shopify app will enable you to manage your PayWhirl account from within Shopify. Click the green "Get App" button: Note: There is a Free monthly version available; however, a PW Pro subscription has reduced transaction rates. A Stripe account is also required to use PayWhirl. If you don't have one, one will be created upon setup. Enter your Shopify domain when prompted to do so. Allow PayWhirl permission when Shopify asks for it. You will then see this confirmation message that the app was installed. Now it's time to make your Plans and configure PayWhirl. You can either configure this through the PayWhirl app or directly at paywhirl.com. For help configuring the PayWhirl plans and general setup, see Getting Started. Linking to Shopify has these benefits and more: Create Products From Plans When you create plans in PayWhirl, they are automatically added as products in your Shopify store. These products have a single variant assigned to them. Create Customers From Subscribers When a subscriber signs up through PayWhirl, they are automatically added as a customer in your Shopify store. These customer records work just like any other in your store. Create Orders from Subscriptions Whenever a subscriber is successfully charged for a subscription in PayWhirl, we automatically create an order for them in your Shopify store. In addition to your PayWhirl notification emails (if you have them set up to be copied to you), you'll also receive an order notification from Shopify just like any other order that is placed in your store. These can often be read by fulfillment companies such as ShipStation.
https://docs.paywhirl.com/PayWhirl/classic-system-support-v1/shopify-support/v1-adding-the-paywhirl-app-to-shopify
2019-04-18T14:18:18
CC-MAIN-2019-18
1555578517682.16
[array(['https://uploads.intercomcdn.com/i/o/19829722/c8e710efa275ab02f59576ab/note.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12717060/567a3bcca3482a2e96fd7c07/get_app.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12717105/52e09fec36839a1e18e82713/shopiffy_connect.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12717116/3e798cc0bd5aa8a0ecf9686f/permissions.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12717196/33b0de2b646dd196561bb724/connected.png', None], dtype=object) ]
docs.paywhirl.com
As the first step, please check your UI: in case you are in the Developer Console view, change it to the Classic UI. In the Applications menu, click on Applications then on the Add Application button. The next window should look like below. Platform: Web After that, the name and logo are optional. When finished, click on Next. In the meantime open StoriesOnBoard and go to the approved and verified domain configuration dialog. Now go to the SAML Settings page in Okta and fill in the GENERAL part, based on the StoriesOnBoard Configure Domain dialog SP MetaData tab. a. (Okta) Single sign on URL == (StoriesOnBoard) Login URL (ACS) b. (Okta) Audience URL (SP Entity ID) == (StoriesOnBoard) EntityID c. (Okta) Default RelayState - this field should stay empty d. (Okta) Name ID format == (StoriesOnBoard) NameID Format (fixed EmailAddress) e. (Okta) Application username == "Okta username" Now fill in the ATTRIBUTE STATEMENTS (OPTIONAL) part like below, then click on the Next button: On the next page, please select I'm an Okta customer adding an internal app, then click on Finish. Now you will be forwarded to the saved application's Setup page; please click on the link and download the Identity Provider Metadata. You can add users or groups who will be able to log in to StoriesOnBoard on the Assignments tab. In StoriesOnBoard, please fill in the SSO Settings tab, which is on the Configure Domain dialog, and click on Save. a. Turn the Enable SAML 2.0 Single Sign On checkbox on (keep it on Optional for now) b. First name attribute = "FirstName" c. Last name attribute = "LastName" d. Identity Provider Metadata = the content of the previously downloaded file. After the setup is finished, assigned users should be able to log in using SSO. Both IdP- and SP-initiated login methods work; users don't need to have an existing StoriesOnBoard account, it will be created automatically during sign-in. We hope this article covered the topic well. In case you have any further questions, please contact our support team via the chat widget in StoriesOnBoard or [email protected].
http://docs.storiesonboard.com/en/articles/2912074-setting-up-sso-with-okta
2019-07-15T20:04:53
CC-MAIN-2019-30
1563195524111.50
[array(['https://downloads.intercomcdn.com/i/o/116684368/7a3e4df2d8d4bbd1159371de/image+%287%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/116684689/86568b07067a0219ea5f5155/image+%285%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/116684919/38560aa8231a7c915d7c90f5/image+%286%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/116693028/b232faf1d8e8d500bb552406/image+%288%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/116693483/83f492ec73395ff560bea1c7/image+%289%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/116696119/1b66f2df3d340fd6367f37f1/image+%2810%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/116700250/3fd70db5802149e6aa61bec3/image+%2811%29.png', None], dtype=object) ]
docs.storiesonboard.com
Below is a selection of tools for interactive documentary makers. Explory A free mobile-storytelling app that allows you to create engaging interactive stories in minutes. Easily combine photos, video, text, music and narration without the time and effort of video editing. A Kickstarter-funded project. Frame Trail FrameTrail is open-source software that allows you to experience, manage and edit interactive video directly in your web browser. It enables hyperlinking filmic contents, including additional multimedia documents (e.g. text overlays, images or interactive maps) and adding supplementing materials (annotations) at specific points. Exposure Primarily aimed at photographers, Exposure allows you to create a blend of text narratives and photographs within a drag and drop framework. Works as a desktop app with a freemium publishing model. Interlude Treehouse An online, interactive video authoring site with little software for the user to download, and free for non-commercial use! Commercial uses are expected, and pricing for them is determined on a case-by-case basis. Simple to use for basic projects; recommended for interactive music videos. The Korsakow System Interactive editing & publishing application dedicated to creative storytellers. It was developed by Honkytonk Films and has both free and premium plans available. Odyssey.js An open-source tool that utilizes maps to help turn data into interactive multimedia stories without the user needing coding skills. Pageflow Open source multimedia authoring tool (German) Popcorn Maker Created by Mozilla, this was the first free HTML5 authoring tool to become available in 2011. A lot of projects have used the framework and regular Popathon labs are run internationally – find out more. SIMILE Widgets: Timeline Part of the SIMILE widgets collection; an open-source “spin-off” from the SIMILE project at MIT. They offer free, open-source web widgets, mostly for data visualizations. They are maintained and improved over time by a community of open-source developers. With Timeline you can create an interactive web-based widget for visualising temporal data. Storehouse A free visual storytelling app for the iPad that combines photos, videos, and text. Publish your stories for friends and followers, or share them by email, Facebook, or Twitter. Explore stories created by your friends and the Storehouse community of storytellers from all around the world. TimelineJS Developed by Northwestern University Knight Lab, TimelineJS is an open-source tool that enables anyone to build visually rich, interactive timelines. Beginners can create a timeline using nothing more than a Google spreadsheet. Experts can use their JSON skills to create custom installations, while keeping TimelineJS’s core look and functionality. Racontr French HTML5 authoring software, used for the Tribeca Hacks event at Cross Video Days 2014. Zeega Very basic, but fun and easy to use. Produced by the people behind Mapping Main Street (launched 2012); see a video here. i-Docs hackathons, development labs and residencies Kel O’Neill (Director, Empire) has also put together a collaborative list of i-Docs hackathons, development labs and residencies – feel free to use, add and share. Think we’re missing something? Contact us at [email protected] or leave a comment below.
http://i-docs.org/2014/07/15/interactive-documentary-tools/
2019-07-15T20:28:34
CC-MAIN-2019-30
1563195524111.50
[]
i-docs.org
Resource files are scripts and other code referenced by policies in your API proxies. These resources can be stored in any of three locations in an organization. - API proxy revision: When stored in an API proxy, resources are available only to the revisions of that API proxy in which the resources are included. For example, you might include a JavaScript resource with version 1 of an API proxy, then change the implementation to use a Python script in version 2 of the proxy. Version 1 has access to only the JavaScript resource, and version 2 has access to only the Python resource. - Environment: When stored in an environment (for example, test or prod), resources are available to any API proxy deployed in the same environment. - Organization: When stored in an organization, resources are available to any API proxy deployed in any environment. py: Python scripts, referenced by policies of type Python. Resources must be implemented in "pure Python" (in the Python language only). (Available in Apigee Edge plans only. See Edge pricing and features.) hosted: Node.js files to deploy to Hosted Targets. You can deploy Node.js as Edge backend target applications. The repositories are available at the following URIs, as described in the Resource files API: /organizations/{org_name}/resourcefiles /organizations/{org_name}/environments/{environment_name}/resourcefiles /organizations/{org_name}/apis/{api_name}/revisions/{revision_number}/resources The following request lists all JavaScript resources at the organization level: $ curl -u email The following request lists all JavaScript resources at the environment level, in the environment called prod: $ curl -u email The following request lists all JavaScript resources in an API proxy revision (the most specific level): $ curl -u email Each request returns a list of resource names. Sample response: { "resourceFile" : [ { "name" : "genvars-pw.js", "type" : "jsc" }, { "name" : "genvars-refresh.js", "type" : "jsc" }, { "name" : "getvars.js", "type" : "jsc" } ] } Populating resource repositories For example, the following request creates a JavaScript resource by POSTing its content: $ curl -X POST -H "Content-type:application/octet-stream" -d 'request.headers["RequestPath"] = context.getVariable("proxy.basepath");' \ -u email Note that Node.js resources are stored at the API proxy revision scope (in the proxy bundle's /resources/node directory). In the management UI API proxy editor, adding the Node.js resource to the Scripts section accomplishes this. So does using the management API (import and update) to store the resource at the API proxy revision scope. Adding Java resources You can add compiled Java resources as JAR files using multiple options in cURL, such as the -T, --data-binary, or -F option (not the -d option). For example: curl -v -u email -H "Content-Type: application/octet-stream" \ -X POST --data-binary @{jar_file} \ "http://{mgmt_server}:{port}/v1/organizations/{org}/resourcefiles?name={jar_file}&type=java" curl -v -u email -H "Content-Type: application/octet-stream" \ -X POST -T "{jar_file}" \ "http://{mgmt_server}:{port}/v1/organizations/{org}/resourcefiles?name={jar_file}&type=java" curl -v -u email -H "Content-Type: multipart/form-data" \ -X POST -F "file=@{jar_file}" \ "http://{mgmt_server}:{port}/v1/organizations/{org}/resourcefiles?name={jar_file}&type=java" See also - Java best practices: Best practices for API proxy design and development - Java cookbook example: XSL Transform policy Updating and deleting resource files For API proxy revision-scoped resources, you can modify and delete them in the management UI's proxy editor.
To update and delete API resources at the environment and organization scopes (as well as the API proxy scope), see: - Update a resource file in an API proxy revision (and the corresponding update and delete operations for the environment and organization scopes in the Resource files API). Let's say that you have populated the same resource in two different repositories — the organization and the environment: $ curl -X POST -H "Content-type:application/octet-stream" -d \ 'request.headers["RequestPath"] = context.getVariable("proxy.basepath");' \ \ -u email $ curl -X POST -H "Content-type:application/octet-stream" -d \ 'request.headers["RequestPath"] = context.getVariable("proxy.basepath");' \ \ -u email When a policy references that resource, the runtime resolves it to the most specific scope in which it exists (here, the environment), so the environment-level script is the one that runs: request.headers["RequestPath"] = context.getVariable("proxy.basepath"); For more information on listing and getting resource files, see Resource files API in the management API.
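If you prefer scripting these calls instead of raw cURL, the sketch below mirrors the listing requests above using Python's requests library; the management API base URL, the type query parameter, the organization name, and the credentials are assumptions you would replace with your own values:

import requests

BASE = "https://api.enterprise.apigee.com/v1"   # classic Edge management API (assumed)
ORG = "myorg"                                    # placeholder organization name

resp = requests.get(
    f"{BASE}/organizations/{ORG}/resourcefiles",
    params={"type": "jsc"},                      # list only JavaScript resources (assumed filter)
    auth=("me@example.com", "password"),         # basic auth, like curl -u
)
resp.raise_for_status()

# The response shape matches the sample shown above: {"resourceFile": [...]}
for rf in resp.json().get("resourceFile", []):
    print(rf["name"], rf["type"])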
https://docs.apigee.com/api-platform/develop/resource-files
2019-07-15T20:12:41
CC-MAIN-2019-30
1563195524111.50
[]
docs.apigee.com
This page covers the Settings tab in the Render Settings window. Overview The Settings tab provides system-wide control for various V-Ray features. UI Path ||Render Settings window|| > Settings tab Global Settings The following rollouts change settings globally throughout V-Ray: - Default displacement and subdivision - Controls displacement and subdivision quality. - VRay UI - Customizes the V-Ray user interface. - System - Customizes various system parameters such as memory usage and frame stamps. Chaos Group Telemetry Program The About V-Ray rollout provides information about the installed V-Ray version and access to the Feedback Program Settings. You can change your V-Ray telemetry settings from this window at any time. For more information on Chaos Group's Telemetry Program, see the Chaos Group Telemetry page. Rendering Over a Network The Distributed rendering rollout enables or disables rendering over a network and provides settings for the rendering process.
https://docs.chaosgroup.com/display/VRAY4MAYA/Settings+tab
2019-07-15T20:50:11
CC-MAIN-2019-30
1563195524111.50
[]
docs.chaosgroup.com
The Distributed Rendering Bucket Render Element shows the name of the rendering machine that rendered any particular bucket when distributed rendering is used. Overview The DR Bucket Render Element can be added from ||Render Settings window|| > Render Elements tab > DR Bucket, and its output can be written to a separate file (e.g. filename.DR.vrimg). Text Alignment – Sets the alignment of the machine name that displays on each bucket.
https://docs.chaosgroup.com/pages/viewpage.action?pageId=39814409
2019-07-15T20:10:40
CC-MAIN-2019-30
1563195524111.50
[]
docs.chaosgroup.com
- metrics exchange protocol The data centers in a GSLB setup exchange metrics with each other through the metrics exchange protocol (MEP), which is a proprietary protocol for NetScaler appliances. Note: You cannot configure a GSLB site IP address as the source IP address for site metrics exchange. You can also configure the NetScaler appliance to interact with a non-NetScaler load balancing device; the NetScaler appliance can monitor non-NetScaler load balancing devices. Enable site metrics exchange Site metrics exchanged between the GSLB sites include the status of each load balancing or content switching virtual server, the current number of connections, the current packet rate, and current bandwidth usage information. To set a time delay by using the GUI - Navigate to Configuration > Traffic Management > GSLB > Change GSLB Settings. - In the GSLB Service State Delay Time (secs) box, type the time delay in seconds. Enable persistence information exchange You can configure NetScaler appliances in the GSLB setup to exchange persistence information with each other. Configure metrics exchange protocol
https://docs.citrix.com/en-us/netscaler/12/global-server-load-balancing/configuring-metrics-exchange-protocol.html
2019-07-15T21:10:17
CC-MAIN-2019-30
1563195524111.50
[]
docs.citrix.com
Rename Tables (Database Engine) Applies to: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse. Rename a table in SQL Server or Azure SQL Database. To rename a table in Azure SQL Data Warehouse or Parallel Data Warehouse, use the T-SQL RENAME OBJECT statement. Caution: Think carefully before you rename a table. If existing queries, views, user-defined functions, stored procedures, or programs refer to that table, the name modification will make these objects invalid. In This Topic: Before you begin (Limitations and Restrictions); To rename a table using SQL Server Management Studio. Before You Begin Using SQL Server Management Studio The following example renames the Sales.SalesTerritory table to SalesTerr:
USE AdventureWorks2012;
GO
EXEC sp_rename 'Sales.SalesTerritory', 'SalesTerr';
For additional examples, see sp_rename (Transact-SQL).
https://docs.microsoft.com/en-us/sql/relational-databases/tables/rename-tables-database-engine?view=sql-server-2017
2019-07-15T21:44:25
CC-MAIN-2019-30
1563195524111.50
[]
docs.microsoft.com
A topic selector identifies one or more topics. Topic selector objects are only used by the local client library and are immutable. For more information on topic selector evaluation see the Topic Selectors guide. Compares the receiver to the given topic selector. Returns YES if the topic selector is equal to the receiver, otherwise NO. Evaluate this receiver against a topic path. Returns YES if the receiver selects the topicPath. A convenience wrapper around topicSelectorWithAnyOf:. Create a selector that matches if any of the provided selectors match. Return a topic selector object initialized with the given expression. The expression associated with the receiver. The topic path prefix from this selector pattern. Returns the largest fixed topic path that begins the selector expression. For path selectors, this is the entire path. For split pattern or full pattern selectors, this is a topic path up to, but not including, the first part of the path that contains a regular expression. For selector sets, this method will return the largest prefix that is common to all included selectors. If there is no common prefix, an empty string will be returned.
https://docs.pushtechnology.com/docs/6.2.0/apple/interface_p_t_diffusion_topic_selector.html
2019-07-15T21:05:34
CC-MAIN-2019-30
1563195524111.50
[]
docs.pushtechnology.com
3. llvmlite.ir – The IR layer The llvmlite.ir module contains classes and utilities to build the LLVM Intermediate Representation of native functions. The provided APIs may sometimes look like LLVM’s C++ APIs, but they never call into LLVM (unless otherwise noted): they construct a pure Python representation of the IR. See also To make use of this module, you should be familiar with the concepts presented in the LLVM Language Reference. - 3.1. Types - 3.2. Values - 3.3. Modules - 3.4. IR builders - 3.4.1. Instantiation - 3.4.2. Properties - 3.4.3. Utilities - 3.4.4. Positioning - 3.4.5. Flow control helpers - 3.4.6. Instruction building - 3.5. Example
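As a quick, self-contained taste of the API (a minimal sketch; the module and function names are arbitrary), the following builds the IR for an i32 add(i32, i32) function as a pure Python object and prints its textual LLVM form, without calling into LLVM:

from llvmlite import ir

i32 = ir.IntType(32)
fnty = ir.FunctionType(i32, (i32, i32))

module = ir.Module(name="example")
func = ir.Function(module, fnty, name="add")

block = func.append_basic_block(name="entry")
builder = ir.IRBuilder(block)
a, b = func.args
builder.ret(builder.add(a, b, name="res"))

print(module)   # emits the textual LLVM IR for the module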
http://llvmlite.readthedocs.io/en/latest/ir/index.html
2017-06-22T16:29:39
CC-MAIN-2017-26
1498128319636.73
[]
llvmlite.readthedocs.io
Applies To: Windows Server 2016. The Web Server (IIS) role in Windows Server 2016 provides a secure, easy-to-manage, modular, and extensible platform for reliably hosting websites, services, and applications. With IIS, you can share information with users on the Internet, an intranet, or an extranet. IIS is a unified web platform that integrates IIS, ASP.NET, FTP services, PHP, and Windows Communication Foundation (WCF). For more information, see Web Server (IIS) Overview.
https://docs.microsoft.com/en-us/windows-server/networking/core-network-guide/cncg/server-certs/deploy-server-certificates-for-802.1x-wired-and-wireless-deployments
2017-06-22T17:14:24
CC-MAIN-2017-26
1498128319636.73
[]
docs.microsoft.com
User Guide - Phone - Messages - Passwords and security - Media - Maps and locations - Applications and features - Remember - Collecting and organizing tasks, notes, and more with the Remember app - Adding a folder or an entry to the Remember app - Changing a Remember folder or entry - Delete a folder or an entry in the Remember app - Viewing and searching your Remember entries - Troubleshooting: Remember app - Calendar - Contacts - Clock - Calculator - Browser - Smart Tags - Voice Control - Games - Organizing apps - Documents and files - Settings and options Porsche Design P'9982 smartphone from BlackBerry - 10.2 Viewing and searching your Remember entries
http://docs.blackberry.com/en/smartphone_users/deliverables/57147/mwa1372440329088.jsp
2014-10-20T11:31:56
CC-MAIN-2014-42
1413507442497.30
[]
docs.blackberry.com
Compatibility Matrix Plugin versions: 0.1, 0.2, 0.2.1, 0.3. After you press "Link to JIRA", a new review comment is added on the issue: you can see the link to the newly-created JIRA ticket. The corresponding JIRA issue is shown in the attached screenshot. Requirements ... Usage Mandatory Properties You may also want to share this filter with your team. Specify the sonar.jira.url.param property for the project or module: this is the name of an issue filter that you have previously created on JIRA. Changelog ...
http://docs.codehaus.org/pages/diffpages.action?pageId=121210242&originalId=229740493
2014-10-20T11:27:08
CC-MAIN-2014-42
1413507442497.30
[]
docs.codehaus.org
The most efficient way to do this is with two org.mortbay.jetty.Server instances. There is also a second, less efficient alternative.
http://docs.codehaus.org/pages/viewpage.action?pageId=139165864
2014-10-20T11:34:17
CC-MAIN-2014-42
1413507442497.30
[]
docs.codehaus.org
package org.springframework.ws.soap.soap11;

import java.util.Locale;

import org.springframework.ws.soap.SoapFault;

/**
 * Subinterface of <code>SoapFault</code> that exposes SOAP 1.1 functionality. Necessary because SOAP 1.1 differs from
 * SOAP 1.2 with respect to SOAP Faults.
 *
 * @author Arjen Poutsma
 * @since 1.0.0
 */
public interface Soap11Fault extends SoapFault {

    /** Returns the locale of the fault string. */
    Locale getFaultStringLocale();

}
http://docs.spring.io/spring-ws/sites/2.0/xref/org/springframework/ws/soap/soap11/Soap11Fault.html
2014-10-20T11:49:39
CC-MAIN-2014-42
1413507442497.30
[]
docs.spring.io
Documentation Trend Micro™ Threat Discovery Appliance is available both as a device and as a virtual application installed on a VMware server. The following terminology is used throughout the documentation: The product documentation consists of the following: The Quick Start Guide, Administrator’s Guide, and readme file are available in the Threat Discovery Appliance Solutions CD and at the following Web site:
http://docs.trendmicro.com/all/ent/tms/v2.5/en-us/tda_2.5r2_olh/tda_ag/preface/documentation.htm
2014-10-20T11:19:34
CC-MAIN-2014-42
1413507442497.30
[]
docs.trendmicro.com
Client applications written in Java compile and run like other Java applications. Once again, when you start your client application, you must make sure that the VoltDB library JAR file is in the classpath. For example: $ java -classpath "./:/opt/voltdb/voltdb/*" MyClientApp When developing your application (using one of the sample applications as a template), the run.sh file manages this dependency for you.
http://docs.voltdb.com/UsingVoltDB/RunStartClients.php
2014-10-20T11:22:31
CC-MAIN-2014-42
1413507442497.30
[]
docs.voltdb.com
Enterprise Report Filtering
Filtering Inventoried Lists
When filtering an inventoried list item, filtering can be based on one or more elements of the specific inventoried item. Note that when filtering for multiple elements of a list, AND logic is used. For example, this simple policy will inventory "My Inventory" with "common" and either "one" and "four" or "two" and "three".

bundle agent example
{
  meta:
      "tags" slist => { "autorun" };

  vars:
    !host_001::
      "slist" slist => { "common", "one", "four" },
        meta => { "inventory", "attribute_name=My Inventory" };

    host_001::
      "slist" slist => { "common", "two", "three" },
        meta => { "inventory", "attribute_name=My Inventory" };
}

The above policy can produce inventory that looks like this: Adding a filter where "My Inventory" matches or contains common AND one:
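Purely as an illustration of that AND logic (plain Python, not CFEngine policy or Mission Portal code), a filter that keeps only the hosts whose inventoried list contains every selected value could look like this:

# Illustrative only: emulate the report filter's AND semantics over inventoried lists.
inventory = {
    "host_000": ["common", "one", "four"],   # hosts without the host_001 class
    "host_001": ["common", "two", "three"],
}

def matches(values, selected):
    """True when the inventoried list contains every selected element."""
    return set(selected).issubset(values)

selected = ["common", "one"]
print([host for host, values in inventory.items() if matches(values, selected)])
# -> ['host_000']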
https://docs.cfengine.com/docs/3.10/guide-faq-enterprise-report-filtering.html
2020-02-17T04:07:08
CC-MAIN-2020-10
1581875141653.66
[array(['inventoried-list-items.png', 'inventoried list items'], dtype=object) array(['filter-inventoried-list-items.png', 'inventoried list items'], dtype=object) ]
docs.cfengine.com
Set up streaming
A modular input can stream data to Splunk in either simple or XML streaming mode. In simple mode, Splunk treats the data much like it treats data read from a file, or data that is streamed from scripted inputs. For more information on streaming from scripted inputs, refer to Scripted inputs overview in this manual. In simple streaming mode, Splunk supports all character sets described in Configure character set encoding.
XML streaming mode
With the Modular Inputs feature, new with Splunk 5.0, there is a new way to stream XML data to Splunk. Splunk provides default values for the following parameters when streaming events. If Splunk does not find a definition for these parameters in inputs.conf files, Splunk uses those default values.
Stream unbroken events
Sometimes you do not want to break events, and instead let Splunk interpret the events. You typically send unbroken data in chunks and let Splunk apply line breaking rules. You may want to stream unbroken events either because you are streaming a known format to Splunk, or you may not know the format of the data and you want Splunk to interpret it. The S3 example in this document streams unbroken events in XML mode.
Use the <time> tag when possible
When streaming unbroken events, you also need to tell Splunk when to flush the data from its buffer rather than wait for more data before processing it. For example, Splunk may buffer data that it has read, waiting for a newline character before processing the data. This prevents the data from being indexed until the newline character is read. If you want Splunk to index the data promptly, signal that the chunk of data is complete so that Splunk flushes its buffer.
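To make XML streaming mode concrete, here is a small, hedged sketch of a modular input script that emits complete events wrapped in <stream>/<event>/<data>/<time> elements on stdout; the event text and timestamps are placeholders, and the full element set (including attributes for unbroken events) is covered by the modular input reference:

import sys
import time
from xml.sax.saxutils import escape

def stream_event(text):
    # Emit one complete event; <time> carries an explicit epoch timestamp.
    sys.stdout.write("<event>")
    sys.stdout.write("<time>%.3f</time>" % time.time())
    sys.stdout.write("<data>%s</data>" % escape(text))
    sys.stdout.write("</event>\n")
    sys.stdout.flush()

if __name__ == "__main__":
    sys.stdout.write("<stream>\n")
    stream_event("hello from my modular input")
    stream_event("another sample event")
    sys.stdout.write("</stream>\n")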
https://docs.splunk.com/Documentation/Splunk/6.2.15/AdvancedDev/ModInputsStream
2020-02-17T03:47:26
CC-MAIN-2020-10
1581875141653.66
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Tools > Get External Data > Edit Attachments Displays the Edit Attachments dialog box, which allows you to add, edit, or remove files associated with the selected object in the open project. Changes made using this command also alter the entries on the Get External Data menu. The file associations are saved to the Microsoft Access Get External Data database file for the project. You can attach virtually any file type to an object in the project, such as Microsoft Word, Excel, and PowerPoint files; ASCII text files from Notepad or from the Smart Review Text window; MicroStation files, sound, animation, and Video Engine files. Executable files and their associated files must exist before you attach them. You can also correlate and attach one or more Smart 3D drawing (.pid or .sha) files to 3D objects in the model with Auto-Attach. For more information, see Auto-Attach Smart 3D Drawings. Attach External Data to a Project Select a graphic object in the model. Click Tools > Get External Data > Edit Attachments. If you select an object in the project that contains no property data and no attached data, the Edit Attachments button is not active. Edit Attachments Dialog Box Click Insert on the Edit Attachments dialog box, and then select the label data property for the attachment. To create a unique mapping between an object and document attachments, select Linkages. Otherwise, you can have the same document attachments associated to multiple objects. For example, if you select the object Name for the attachment, then all objects with the same value for Name also have the attachment. Click the added row containing the selected property to display the Edit Filename / Argument dialog box. Browse to the data file to attach, specify any Arguments for the data file, and then click OK. Click OK to close the Edit Attachments dialog box. The Tools > Get External Data menu lists attached documents."> Attaching a file to a property value of one element automatically attaches the file to every element with the same attribute value. To remove an attachment, select the attachment in the list, and then click Delete on the Edit Attachments dialog box. This removes the attachment from the display in Smart Review, but does not delete the actual file.
https://docs.hexagonppm.com/reader/sSwl1N~YZKxGfELuEWj0oQ/TlaXHjoJcNTLd4lM4nBeJg
2020-02-17T03:55:48
CC-MAIN-2020-10
1581875141653.66
[]
docs.hexagonppm.com
Converting .dbp files to .dbproj files With the release of Visual Studio 2010 Beta 2, database projects with the .dbp extension are deprecated. There are several ways to convert your project manually - Import Script. Unfortunately this will shred your scripts for schema objects and move any conditional logic into the scriptsignored file. - Add Existing Item. You could create a Visual Studio 2010 project and use “Add Existing Item” to bring your .dbp scripts into your new project. The downside with this is 1) it is time consuming and 2) you lose the workspace the version control provider needs – therefore loosing history of your files. - “Include In Project”. Create a new Visual Studio 2010 project and drag your .dbp project folder structure on the top of the .dbproj file. After your finished select “Show All Items” from the Visual Studio 2010 Solution Explorer and then “Include In Project” for your .dbp files. Unless you create the new .dbproj project over the top of your .dbp project you’ll lose version control history. DbpToDbproj.exe To help Database Project (.dbp) customers with the upgrade to Visual Studio 2010 I’ve written DbpToDbProj. The source and executable is located here. This project builds a command-line executable with parameters allowing you to create either a Visual Studio 2010 2005DSP or 2008DSP project, and indicate if the file structure for the ‘Schema Objects’ folder is ‘by schema’ or ‘by type’. 1: c:\Temp>DbpToDbproj.exe /? 2: Database projects that have the file extension .dbp have been deprecated in Visual Studio 2010 3: DbpToDbproj creates a database project with the .dbproj extension to replace the dbp. 4: 5: DbpToDbproj [drive:][path]filename [/target:[2005|2008]] [/DefaultFileStructure:[ByType | BySchema]] 6: [drive:][path]filename 7: Specifies drive, directory and filename for the dbp file to be converted. 8: 9: /target:[2005|2008] 10: Specifies whether the .dbproj to be created is a Sql2005 or a Sql2008 project. 11: 12: /DefaultFileStructure:[ByType | BySchema]] 13: Specifies whether the directory structure of the .dbproj should be by object schema or type. Specifies whether the directory structure of the .dbproj should be by object schema or type. As an example, here’s a .dbp project from Visual Studio 2008: Running DspToDbproj in the same directory as your existing .dbp project will create a file called ‘<dbpfilename>.dbproj’. 1: C:\Temp\dbpProject\dbpProject>DbpToDbproj.exe dbpProject.dbp 2: Converting... 3: C:\Temp\dbpProject\dbpProject\dbpProject.dbp 4: ...to... 5: C:\Temp\dbpProject\dbpProject\dbpProject.dbproj 6: Successfully converted dbp to C:\Temp\dbpProject\dbpProject\dbpProject.dbproj When the .dbproj file is opened you’ll receive an upgrade prompt. The reason for this is that my command-line utility does not interact with your Version Control system. Instead I place an MSBuild property in the dbproj file called “PostUpgradeAddToSCC”. This property is a flag to the database project system that upgrade has added new files and that these new files should be added to SCC. This only happens after the upgrade process so I had to make the system think an upgrade was necessary. If you don’t use version control you can edit Program.cs and remove the line where I create the “PreviousProjectVersion” MSBuild property. The absence of this property will prevent the upgrade wizard from appearing. After opening the .dbproj file the following project will appear in the solution. Notice that the script files are maintained as they were in the .dbp project. 
Each of these script files has a build action of ‘Not In Build’, meaning they do not contribute to the dbproj model. It also creates the standard “Schema Objects”, “Data Generation Plans”, and “Schema Comparisons” folders you’d get with any other Visual Studio 2010 project. If you’re using the new Database Project (.dbproj) project purely for version control you can always delete these directories – otherwise you’ll still be able to import from an existing database or a script. DbpToDbproj will preserve the connections found in the .dbp by writing them into a file called Connections.txt. From these connection strings you should be able to recreate the connection in the Server Explorer manually. Conclusion Hopefully this will make the conversion to Visual Studio 2010 a little easier for everyone with .dbp projects. Once again, the source and executable are located here. If you have any questions please visit our forums here or ping me through this blog. Thanks! -- Patrick
https://docs.microsoft.com/en-us/archive/blogs/psirr/converting-dbp-files-to-dbproj-files
2020-02-17T05:08:55
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Templates Templates form a part of orcharhino’s provisioning setup. (See also: Provisioning Setup). Templates are used to generate the scripts used during the installation of new hosts. They make heavy use of parameterisation, such that general templates can be turned into installation scripts for particular hosts by substituting the appropriate parameters. In addition to provisioning, templates are also used by orcharhino’s remote execution features. (See also: Remote Execution Guide). There are several different types of templates, organized into several different pages of the web interface. (See also: Viewing and/or Editing a Template). Viewing and/or Editing a Template The template window (for a given template) can be reached by clicking on the name of a template in some list of templates (either on the Partition Tables page, the Provisioning Templates page, or the Job Templates page): Hosts > Partition tables >> name of a template Hosts > Provisioning templates >> name of a template Hosts > Job templates >> name of a template The template window exists both to display information on a template and to edit it. It is organized into several tabs: the Help tab displays inbuilt documentation on template syntax (see below). The following screen shot shows (part of) the Help tab:
https://docs.orcharhino.com/sources/management_ui/the_hosts_menu/templates.html
2020-02-17T04:28:15
CC-MAIN-2020-10
1581875141653.66
[array(['../../../_images/template_window_tabs.png', 'The tabs of the template window'], dtype=object) array(['../../../_images/template_window-help_tab.png', 'The build-in template help window'], dtype=object)]
docs.orcharhino.com
If create_empty=true and there are no results, a zero-length file is created. Output files are written to the $SPLUNK_HOME/var/run/splunk/csv directory. Directory separators are not permitted in the filename. Filenames cannot contain spaces.
- singlefile
- Syntax: singlefile=<bool>
- Description: If singlefile=true and the output spans multiple files, collapses the output into a single file.
The leading underscore is reserved for names of internal fields such as _raw and _time. By default, the internal fields _raw and _time are included in the search results in Splunk Web. When the outputcsv command is used in the search, there are additional internal fields that are automatically added to the CSV file. The most common internal fields that are added are:
- _raw
- _time
- _indextime
To exclude specific internal fields from the output, you must specify each field separately using the fields command. Specify the fields command before the outputcsv command. The negative symbol ( - ) specifies to remove the fields. For example, to remove all internal fields, you specify:
... | fields - _* | outputcsv MyTestCsvFile
To exclude specific internal fields from the output, you must specify each field separately. For example:
... | fields - _raw - _indextime _sourcetype _subsecond _serial | outputcsv MyTestCsvfile
See also
Answers: Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about the outputcsv command.
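As a small illustrative follow-up (not from the Splunk documentation), a script outside Splunk can pick up the exported file from the directory mentioned above; the path below assumes the search ended with "... | outputcsv MyTestCsvFile" and that a .csv extension was added on disk:

import csv
import os

splunk_home = os.environ.get("SPLUNK_HOME", "/opt/splunk")   # assumed install location
path = os.path.join(splunk_home, "var", "run", "splunk", "csv", "MyTestCsvFile.csv")

with open(path, newline="") as f:
    for row in csv.DictReader(f):
        print(row)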
https://docs.splunk.com/Documentation/Splunk/6.5.2/SearchReference/Outputcsv
2020-02-17T04:42:58
CC-MAIN-2020-10
1581875141653.66
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Activation DMS activation operations. Detailed Description DMS activation operations. Function Documentation Activate this device with the specified product. Once activated, device updates will get this product's firmware. Before the device can activate to a product it needs to be 'claimed' by your DMS account. The device is automatically claimed when developing with the SDK. In the field, a device should be claimed during manufacturing. See the ZentriOS command documentation for more information: dms - Note - The server_msg argument, a zos_buffer_t, should be pre-configured to point to a buffer that will be populated with a message from the server. The .data member should point to a string buffer. The .size member should be set to the size of the string buffer. - Parameters - - Returns - Result of API, see zos_result_t
https://docs.zentri.com/zentrios/wz/latest/sdk/group-api-dms-register
2020-02-17T05:17:58
CC-MAIN-2020-10
1581875141653.66
[]
docs.zentri.com
infix eqv Documentation for infix eqv assembled from the following types: language documentation Operators (Operators) infix eqv sub infix:<eqv>(Any, Any) Equivalence operator. Returns» class ObjAt (ObjAt)»
https://docs-stage.perl6.org/routine/eqv
2020-02-17T04:51:12
CC-MAIN-2020-10
1581875141653.66
[]
docs-stage.perl6.org
Datasources allow you to use data sources you’re already familiar with. Generators help introspect data stores and data execution frameworks (such as airflow, Nifi, dbt, or dagster) to describe and produce batches of data ready for analysis. This enables fetching, validation, profiling, and documentation of your data in a way that is meaningful within your existing infrastructure and work environment. DataContexts use a datasource-based namespace, where each accessible type of data has a three-part normalized data_asset_name, consisting of datasource/generator/generator_asset. The datasource actually connects to a source of data and returns Great Expectations DataAssets connected to a compute environment and ready for validation. The Generator knows how to introspect datasources and produce identifying “batch_kwargs” that define particular slices of data. The generator_asset is a specific name – often a table name or other name familiar to users – that generators can slice into batches. An expectation suite is a collection of expectations ready to be applied to a batch of data. Since in many projects it is useful to have different expectations evaluate in different contexts–profiling vs. testing; warning vs. error; high vs. low compute; ML model or dashboard–suites provide a namespace option for selecting which expectations a DataContext returns. A Great Expectations DataContext describes data assets using a three-part namespace consisting of datasource_name, generator_name, and generator_asset. To run validation for a data_asset, we need two additional elements: a batch to validate (in our case it is a file loaded into a Pandas DataFrame) and an expectation_suite to validate against. In many simple projects, the datasource or generator name may be omitted and the DataContext will infer the correct name when there is no ambiguity. The DataContext also provides other services, such as storing and substituting evaluation parameters during validation. See DataContext Evaluation Parameter Store for more information. See the Data Context Reference for more information.
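As a rough illustration of how these pieces fit together, the sketch below follows the era of the API described on this page; the datasource, generator, asset, and suite names are placeholders, and the exact get_batch argument order varies between Great Expectations versions, so treat the call shown here as an assumption to verify against your installed release:

import great_expectations as ge

# Load the project's DataContext (reads the project configuration).
context = ge.data_context.DataContext()

# Fully qualified three-part name: datasource/generator/generator_asset.
data_asset_name = "my_datasource/default/my_table"        # placeholder names

# Assumed signature: (data_asset_name, expectation_suite_name, batch_kwargs).
batch_kwargs = {"path": "data/my_table.csv"}               # e.g. a file for a Pandas datasource
batch = context.get_batch(data_asset_name, "warning", batch_kwargs)

results = batch.validate()
print(results["success"])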
https://docs.greatexpectations.io/en/latest/features/data_context.html
2020-02-17T03:55:17
CC-MAIN-2020-10
1581875141653.66
[array(['../_images/data_asset_namespace.png', '../_images/data_asset_namespace.png'], dtype=object)]
docs.greatexpectations.io
TestFlight iOS App Distribution
This guide shows how to configure iOS app distribution from Semaphore to Apple TestFlight using Fastlane. For an introduction to building iOS apps on Semaphore, see the iOS tutorial. First, make sure to configure your project to use Fastlane, Match and code signing by following the Code signing for iOS projects guide. To publish to TestFlight, create a separate Fastlane lane where you'll invoke the appropriate commands:

# fastlane/Fastfile
platform :ios do
  desc "Submit the app to TestFlight"
  lane :release do
    match(type: "appstore")
    gym
    pilot
  end
end

For the whole process to work, make sure you've configured environment variables required for match and pilot. Namely, the URL for match's certificate repository, the encryption password for it, and the Apple ID for logging in to the Apple Developer portal and submitting a new build. This is described in detail in the Code signing for iOS projects guide. In your Semaphore CI/CD configuration, you can now run the bundle exec fastlane release command in a job:

# .semaphore/semaphore.yml
version: v1.0
name: Semaphore iOS example
agent:
  machine:
    type: a1-standard-4
    os_image: macos-mojave-xcode10
blocks:
  - name: Submit to TestFlight
    task:
      env_vars:
        - name: LANG
          value: en_US.UTF-8
      secrets:
        - name: fastlane-env
        - name: ios-cert-repo
      prologue:
        commands:
          - checkout
          - bundle install --path vendor/bundle
      jobs:
        - name: Fastlane build
          commands:
            - bundle exec fastlane release

Semaphore maintains an example open source iOS project with working Fastlane and Semaphore configuration for your convenience.
https://docs.semaphoreci.com/use-cases/testflight-ios-app-distribution/
2020-02-17T04:13:49
CC-MAIN-2020-10
1581875141653.66
[]
docs.semaphoreci.com
Logging conversions through the API
If you need to log conversions for actions that are not supported by default (see Logging conversions on your website), you can write your own custom code and use the API to log conversions. This allows you to monitor any type of activity performed by users on your website and view the results using the conversion reports available in the Web analytics application. Use the following code to log conversions via the API:

using CMS.WebAnalytics;
using CMS.DocumentEngine;
using CMS.SiteProvider;
using CMS.Localization;
...
string siteName = SiteContext.CurrentSiteName;
string aliasPath = DocumentContext.CurrentAliasPath;

// Checks that web analytics are enabled in the site's settings.
// Confirms that the current IP address, alias path and URL extension are not excluded from web analytics tracking.
if (AnalyticsHelper.IsLoggingEnabled(siteName, aliasPath))
{
    // Logs the conversion according to the specified parameters.
    HitLogProvider.LogConversions(siteName, LocalizationContext.PreferredCultureCode, ConversionName, 0, 1, ConversionValue);
}

Log conversions using the HitLogProvider class from the CMS.WebAnalytics namespace, specifically the following method:

LogConversions(string siteName, string culture, string objectName, int objectId, int count, double value)

- siteName - sets the code name of the site for which the conversion is logged.
- culture - sets the culture code under which the conversion is logged.
- objectName - specifies the code name of the conversion that is logged.
- objectId - specifies the ID of the conversion. You can set this parameter to 0 if you specify a valid code name in the objectName parameter.
- count - sets the amount of conversion hits that the method logs. This parameter is optional. The default value (1) is used if not specified.
- value - specifies the value logged for the conversion.

In addition to logging a general conversion, this method checks if the current user has passed through a page with a running A/B or Multivariate test. If this is the case, the method automatically logs the conversion within the appropriate context and includes the conversion in the statistics of the given test.

Integrating the custom code into your website
There are several possible ways to include your custom conversion code into the website's functionality:
- When tracking conversions on a specific page, you can use a custom user control or web part to ensure that the system executes the code as required.
- To log actions that may occur anywhere on the website, utilize global event handlers.
https://docs.kentico.com/k9/custom-development/miscellaneous-custom-development-tasks/web-analytics-api/logging-conversions-through-the-api
2020-02-17T04:37:08
CC-MAIN-2020-10
1581875141653.66
[]
docs.kentico.com
.NET Runtime for Azure Mobile Services – First Impressions It’s been now about a month since the .NET runtime for Azure Mobile Services was released, but I haven’t seen a post comparing it with what we had before (a node.js runtime). I think that’s one good way to understand what the .NET backend gives, and what it lacks compared to the node.js runtime, and therefore shed some light on which scenarios each runtime is better suited for. In this post I’ll go briefly over many concepts in the two backend implementations, where they’re similar and where they’re different. Before I start, however, I’d like to point out that the .NET runtime is not a simple port of the node.js runtime to .NET. Node and .NET are fundamentally different, and the two backend types reflect that. In many ways, it’s almost like comparing apples to oranges – they’re similar in a way that they’re both fruits, but have different tastes – they’re both good in their own way :) Developer experience Node.js is a scripting language – it’s based on JavaScript after all. The code which a developer would write on the server side reflects that – table, api, scheduled tasks, they’re all defined based on (fairly) self-contained scripts which execute a certain task. That makes for a good developing-inside-portal story which we’ve had so far. .NET is not a scripting platform. Granted, there are some ways of using .NET in a scripting way (scriptcs and edge.js come to mind), but what most people associate with .NET is a way to build applications using executables, DLLs and other such artifacts. So developing for the mobile services .NET backend means building a Visual Studio project and uploading it (via the VS Publishing option or a Git-based push) to Azure. Not exactly a model adequate to portal-based editing, which is why in the first version this is not available (it’s possible that if we get enough requests we’ll add that feature in the future). Another difference in the developer experience: the node.js backend is written on top of express.js, a web application framework. This is somewhat hidden in the table scripts (which have their own little programming model) but quite clear on the custom APIs, which expose directly the express.js request and response objects. The .NET backend is written on top of the ASP.NET Web API, and that integration is clear in all aspects of the backend, as we’ll cover below. Tables / data model I think that’s the main difference in the two runtimes. In node, the runtime is fairly integrated with SQL Azure, and that makes for a simple experience with basic scenarios – you create a table in the portal, you’re ready to send requests to it to insert / update / read / delete data. You can define some custom scripts which are executed when such requests arrive, but they’re not really required. In the .NET model tables are classes which inherit from TableController<T> (itself a subclass of Web API’s ApiController), a class which implements some of the boilerplate code for dealing with entities. But you still need to extend that class and implement two things: an initialize method to define the domain manager used by the class, and the actual methods which will receive the requests (POST / GET / PUT / PATCH / DELETE) from the clients – if there is no method (action) that responds to a certain HTTP verb, then requests with that verb cannot be made for that “table”.
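Whichever backend you pick, what the client ultimately sees is plain HTTP plus OData-style querying on the /tables/{name} endpoints, so you can exercise a table from any language. A minimal sketch follows; the service name, table name, and key are placeholders, and it assumes the classic *.azure-mobile.net REST surface where the X-ZUMO-APPLICATION header carries the application key:

import requests

SERVICE_URL = "https://myservice.azure-mobile.net"   # placeholder service name
APP_KEY = "<application key>"                        # from the portal

resp = requests.get(
    SERVICE_URL + "/tables/TodoItem",                # placeholder table/controller name
    headers={"X-ZUMO-APPLICATION": APP_KEY},         # application-level authorization
    params={"$filter": "complete eq false"},         # OData-style query option
)
resp.raise_for_status()
print(resp.json())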
There will be soon some item templates in Visual Studio which will make this process easier, though, scaffolding through some model to define the table controller with all relevant actions. As far as accessing data, the most common way to access SQL data will be via Entity Framework (EF). If you’re familiar with EF, it should not be hard to start using .NET mobile services. If not, there may be an initial ramp up cost (especially in scenarios with relationships of multiple entities, although once you understand EF better, this scenario is a lot simpler to implement than in the node.js case – a topic for a future post), but the whole power of EF is available in the mobile services, including EF Code First (similar to the way the node.js runtime works, where the runtime itself will create the tables in a schema specific to the mobile service in your database). But since EF is used under the covers, you can also connect the runtime with an existing database, as long as there is a connection string which a context class can access. Another feature in the .NET backend is the ability to choose different domain managers, and use a data layer other than EF. We already shipped NuGet packages that expose domain managers for Azure Table Storage and MongoDB. We should have samples of those storage layers in future posts. Custom APIs That’s an area where the two runtimes are fairly similar. APIs in the node runtime are essentially express.js handlers, in which the request object has some additional properties (user, service) which provide access to the service runtime. Likewise, custom APIs in the .NET runtime are nothing more than ASP.NET Web API controllers – classes derived from ApiController (or to be more specific, classes which implement the IHttpController interface, but the large majority of uses is done via the ApiController class), and in the class implementation if your class has a public property of type ApiServices, the dependency injection of Web API will set the proper instance to let the code access the service runtime. Push notifications In the node.js runtime there are two ways to send push notifications to clients. By default, you can use the push object to access objects that can send notification to individual push notification systems (apns, gcm, mpns, wns). Or you can enable the enhanced push feature, which uses Notification Hubs to simplify common scenarios and remove the need to explicitly manage push channels. The .NET runtime only has the Notification Hubs option – there’s no simple way to push directly to individual push notification systems. You can still use their SDKs (or REST APIs) to do so, but it’s not integrated in the runtime. Local / remote debugging One feature request which we’ve received a few times for mobile services is to make debugging of the service easier. In the node.js runtime, the best way to do that is to add some tracing to the console in parts of the scripts and check the logs after executing the code. In the .NET runtime you can run the service locally and try most of the scenarios in your own machine – simply F5 the project in Visual Studio and a service in localhost will be ready to receive requests and hit any breakpoints you set in the web mobile service application. You can also set up remote debugging, where you can debug the service running live in production, as Henrik explained in his post. 
A word of caution when doing local debugging: when running in the local machine, the project’s web.config will be used to determine the settings of the application, including keys, connection strings, and many others. When the project is deployed to the Azure Mobile Service, however, the web.config is ignored, as the settings will come from the server and can be set via the portal. Another difference between running the service locally and in production is that the authentication is relaxed locally – HTTP requests don’t need to have keys (application or user), even for tables / actions marked with an AuthorizationLevel.User attribute (that can be changed by calling the SetIsHosted method in the configuration object during initialization). Portal differences The main difference, which surprised some users, is that there is no “data” or “api” tabs in the portal for services with the .NET runtime. Since there’s no script editing for those two features (they’re all controllers, as I mentioned above), all tables and APIs are defined in the service code itself and are not reflected in the portal. Missing features The .NET runtime backend is still only about a month old, so there are some features which have yet to be implemented. The most notable ones are the following: - Client-side login flow, where you use an authentication provider SDK to login the application and then send a token obtained from the application to log in to the mobile service - Azure Active Directory login (server-side flow) - HTML/JS support – there is a bug in the CORS handling which should be fixed soon. There is a workaround described here. - Tables with integer ids – this is actually not entirely a missing feature, since we can define an entity with integer ids (and other types) to be stored in the database, but the type which is exposed in the TableController class needs to have a string id (for the REST API) and we can use a mapping between the two types. Another topic for a future post. Wrapping up That’s it. I hope to provide a quick comparison chart between the two runtimes. As I mentioned, we’re actively working on the .NET runtime (and in the node.js as well, we’re not forgetting about it :) so new features should be coming up soon, and we’ll announce them in the Azure Mobile blog.
https://docs.microsoft.com/en-us/archive/blogs/carlosfigueira/net-runtime-for-azure-mobile-services-first-impressions
2020-02-17T05:04:21
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Docker, Elastic Beanstalk and Git: a useful trinity for agile development? I - Introduction This is the first of a two-part series that demonstrates a pain-free solution a developer could use to transition code from laptop to production. The fictional deployment scenario depicted in this post is one method that can significantly reduce operational overhead on the developer. This series will make use of technologies such as Git, Docker, Elastic Beanstalk, and other standard tools. In this first article, we will walk through a high-level demonstration of the following workflow: - Environment setup - Elastic Beanstalk configuration - Manual environment deployment - Deploying a feature release to local container - Transitioning feature release to test Elastic Beanstalk environments Caveat: This project imposes a deliberately simplified dummy application, release workflow (i.e. no automated tests) and environment layout (just local dev and test) in order to illustrate the key concepts behind running Git, Docker, and Elastic Beanstalk as an integrated unit. Disclaimer: The demonstration code in the corresponding repository is for illustrative purposes only and may not be sufficiently robust for production use. Users should carefully inspect sample code before running in a production environment. Use at your own risk. Disclosure: The idea of a Makefile mechanism to automate container preparation, build, push etc. was inspired by this excellent article by Victor Lin. II - Design Principles The following fundamental design principles will be followed during the course of this adventure: - Simplicity - adhere to the principles of KISS and Occam's Razor - Agility - switching between environments and deploying application releases should use only a single simple shell command - Immutability - consider container images as immutable. This eliminates dependency issues when deploying applications across environments. The local development runtime environment should thus be very close to production. - Automation - Nirvana is a fully automated deployment of application releases triggered by Git workflow events NOTE: Strictly speaking the kernel and container supporting services could differ between hosts; however, the impact on most applications would be minimal given that most dependencies exist within the runtime environment. III Prerequisites This article, and the corresponding demonstration code, has some dependencies on local environment and accounts with Docker, Github and AWS. You will need the following: - Ruby and Python interpreters - Unix "Make" utility - Elastic Beanstalk CLI tools (eb-cli) - Local Git binaries - AWS Account with default Public/Private/NAT VPC configured - AWS IAM user with appropriate policy set - Github account - DockerHub account - Local Docker host (e.g. via Docker Toolbox for OS X) IV - Demonstration Rather than launch right into the details of how to set this up in your own environment, I decided to move that stuff to the Appendices at the end of this article and to dive straight into the demonstration. In order to replicate the demonstration, you need to first successfully install & configure the dependencies as described in Appendix A, and setup a local environment as per Appendix B. 
Setting the scene Your latest application version is running in production, as a quick check with "eb status" confirms: ~/trinity/master> eb status Environment details for: trinity-prod Application name: trinity Region: us-west-2 Deployed Version: v1_1-1-g5bf2-28 01:14:36.798000+00:00 Status: Ready Health: Green ~/trinity/master> You decide to take a look in your browser, using the "eb open" command: ~/trinity/master> eb open New "feature" request It seems that some extra-terrestrial users (close acquaintances of HAL, I am led to believe) took offense at the rather limited scope of the greeting and made complaints to the customer service team. An issue was raised to this effect and assigned to you. Start work in feature branch Eager to put this issue to bed, you create a feature branch and start work immediately: ~/trinity/master> git checkout -b issue-001 master Switched to a new branch 'issue-001' You make the necessary changes to app.rb and commit: ~/trinity/issue-001> git commit -a -m "Fixed #1 - Greeting message scope offensive to extra-terrestrials" [issue-001 76f9252] Fixed #1 - Greeting message scope offensive to extra-terrestrials 1 file changed, 1 insertion(+), 1 deletion(-) Create new application container Since this is a Dockerized application, you can create a new container image and test this image locally before pushing to remote staging environment. You just need a simple "make" command to build the container and push to Docker hub: ~/trinity/issue-001> make + ++ Building Git archive of HEAD at Docker/trinity.tar... + + ++ Performing build of Docker image djrut/trinity:v1.1-39-ge3b62fe... + Sending build context to Docker daemon 3.69 MB Step 0 : FROM ruby:slim ---> c80da6b5b71b Step 1 : MAINTAINER Duncan Rutland <[email protected]> ---> Using cache ---> 0d47bd3b0475 Step 2 : RUN mkdir -p /usr/src/app ---> Using cache ---> 04d15bc0ba0e Step 3 : WORKDIR /usr/src/app ---> Using cache ---> 2d4736c6ab50 Step 4 : ADD trinity.tar /usr/src/app ---> 00915d05d730 Removing intermediate container f6f88d91ee75 Step 5 : RUN bundle install --deployment ---> Running in 5faed9595c09 [...SNIP...] ~/trinity/issue-001> Test new application container locally Now that you have a new Docker image containing the recent commit, you decide to first perform a quick test on your local Docker host using the eb-cli tool "eb local run" command to spin-up the new container: ~/trinity/issue-001> eb local run v1.1-39-ge3b62fe: Pulling from djrut/trinity 843e2bded498: Already exists [...SNIP...] 3b8cf611759b: Already exists Digest: sha256:c8c32d75e78a240130f4bc559d96d03e384623a127ab2dd17eeeea758e16c3b0 Status: Image is up to date for djrut/trinity:v1.1-39-ge3b62fe Sending build context to Docker daemon 3.734 MB Step 0 : FROM djrut/trinity:v1.1-39-ge3b62fe ---> 3b8cf611759b Step 1 : EXPOSE 80 ---> Running in 3e9cfa2be561 ---> 532e52378fb9 Removing intermediate container 3e9cfa2be561 Successfully built 532e52378fb9 [2015-10-19 20:53:01] INFO WEBrick 1.3.1 [2015-10-19 20:53:01] INFO ruby 2.2.3 (2015-08-18) [x86_64-linux] == Sinatra (v1.4.6) has taken the stage on 80 for development with backup from WEBrick [2015-10-19 20:53:01] INFO WEBrick::HTTPServer#start: pid=1 port=80 You open a browser window and connect to the Docker host IP and port that is running the new application version (in this case,): Success! The new greeting message is working as expected. The next step is to run the new container images in a true AWS test environment to see how this would work in production. 
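The build output above essentially dictates the Dockerfile behind the image. A reconstruction might look like the following; the EXPOSE and CMD lines are assumptions (EXPOSE 80 only appears in the later "eb local run" output, and the start command is inferred from the Sinatra app), and the maintainer email is redacted in the source:

FROM ruby:slim
MAINTAINER Duncan Rutland <...>
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD trinity.tar /usr/src/app
RUN bundle install --deployment
# Assumed remainder (snipped in the output above):
EXPOSE 80
CMD ["bundle", "exec", "ruby", "app.rb", "-o", "0.0.0.0", "-p", "80"]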
Test new application container in test environment A simple "eb create" command is all that is needed to bind this branch (using the --branch_default option) and spin-up this new version into a fresh staging environment in your accounts default VPC: ~/trinity/issue-001> eb create trinity-test-001 --branch_default This time the "eb open" command can be run to fire up a browser window pointing to the test environment: ~/trinity/issue-001> eb open ...and voila! The new application image is running successfully in staging. NOTE: For longer running branches (such as those that wrap entire versions/milestones), this staging environment is persistent and only requires an "eb deploy" to push newer versions, after committing changes and running "make". V - Conclusion During this demonstration, we examined a simplified use-case that enabled an easy-to-use and agile deployment mechanism with immutable application containers. The developer used three simple shell commands ("git commit", "make", and "eb deploy") to build a new immutable container and to push to the appropriate environment. This approach dramatically reduced the likelihood of broken dependencies as application releases are progressed from developer laptop onto to staging and production. In Part II, we will take a deep peek under the covers to examine exactly how we integrated Docker, Elastic Beanstalk and Git to enable the simple example above. Thank-you for your time and attention! Appendix A - Dependencies The following section outlines the steps needed to setup a local environment on Max OS X. Install Homebrew ruby -e "$(curl -fsSL)" Install Python sudo brew install python Install eb-cli sudo pip install eb-cli Install Docker Toolbox Follow the instructions here to install and configure Docker host running in a VirtualBox VM on OS X. NOTE: I had issues with connectivity to the host starting after initial install (I was getting "no route to host"). After some troubleshooting, this was remedied by a restart of OS X. It is not necessary, as some older issues relating to this problem indicate, to create manual NAT table entries Setup Git Most modern Unix variants have the Git package already installed. Follow the instructions here to setup Git. There are some useful instructions here to setup credential caching to avoid having to frequently re-type your credentials. Configure AWS credentials I prefer to populate the .aws/credentials file as follows: [default] aws_access_key_id = [ACCESS KEY] aws_secret_access_key = [SECRET] You need either an IAM role assigned to this user or a containing group that assigned to this user or containing group that has adequate permissions to IAM, EB, EC2, S3 etc... Since this is my playground account, I used a wide open admin policy: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] } Caveat: This IAM policy is not recommended for production use, which should utilize a fine-grained IAM policy. Appendix B - Environment Setup There are number steps involved here to get the environment setup, but remember that these are one time actions that you will not need to repeat again unless you need to recreate the environment from scratch. Step 1 - Choose a name for your application You need to create a unique name for your forked version of the trinity application, because Elastic Beanstalk DNS CNAME records must be globally unique. We shall refer to this name as APP_NAME henceforth. 
Step 2 - Fork & clone Git repository The first step is to fork and clone the demo Git repository. Full details on how do to this can be found here however the basic steps are: - On GitHub, navigate to the djrut/trinity repository - In the top-right corner of the page, click Fork. You now have a fork of the demo repository in your Github account. - Create local clone, substituting your Github USERNAME git clone[USER_NAME]/trinity.git - Create upstream repository to allow sync with original project git remote add upstream Step 2 - Docker Hub setup - Create a Docker Hub account and create a repository for APP_NAME - Edit "Makefile" - Substitute USER value (currently set to "djrut") with your Docker Hub username. - Substitute REPO value (currently set to "trinity") with your newly created APP_NAME - Login to Docker hub (this will permanently store your Docker Hub credentials in ~/.docker/config.json) docker login Step 3 - Initialize Elastic Beanstalk environments NOTE: This step requires either that you have a default VPC configured with public/private NAT configuration or that you explicitly specify the VPC and subnet IDs during Elastic Beanstalk environment configuration step. I use the latter mechanism to supply a previously saved configuration to the "eb create" command. a) Initialize the Elastic Beanstalk Application eb init [APP_NAME] --region us-west-2 --platform "Docker 1.7.1" If this succeeds, you should see a message like "Application [APP_NAME] has been created." b) Create "production" Elastic Beanstalk environment Ensure that you are currently in the up-to-date "master" branch of the application: prompt> git status On branch master Your branch is up-to-date with 'origin/master'. nothing to commit, working directory clean Run the "eb create", substituting APP_NAME for your application name: eb create [APP_NAME]-prod --branch_default You should now see a trail of events as Elastic Beanstalk launches the environment. Here is a snippet from mine: Creating application version archive "v1_1". Uploading trinity/v1_1.zip to S3. This may take a while. Upload Complete.: UNKNOWN Updated: 2015-09-27 19:24:42.760000+00:00 Printing Status: INFO: createEnvironment is starting. INFO: Using elasticbeanstalk-us-west-2-852112010953 as Amazon S3 storage bucket for environment data. INFO: Created security group named: sg-d47aebb0 INFO: Created load balancer named: awseb-e-p-AWSEBLoa-XUW9PIDWF5JH INFO: Created security group named: sg-d27aebb6 INFO: Created Auto Scaling launch configuration named: awseb-e-pi9ycc8gfs-stack-AWSEBAutoScalingLaunchConfiguration-1SUHKGKXB0C01 INFO: Environment health has transitioned to Pending. There are no instances. INFO: Added instance [i-7b176ca0] to your environment. INFO: Waiting for EC2 instances to launch. This may take a few minutes. At this stage, you can safely CTRL-C and wait a few minutes for the environment to be spun up. This takes longer for the first deployment, since the full Docker image needs to be downloaded. Subsequent deployments of newer versions of the application be faster, since only the modified layers of the image need to be downloaded. You can check periodically with "eb status" and wait for "Health: Green" to indicate that all is well: prompt> eb status-27 19:32:43.591000+00:00 Status: Ready Health: Green Finally, there is a handy command "eb open" that opens the current environment in your browser for a quick eye test: eb open
https://docs.rackspace.com/blog/trinity-article-I/
2020-02-17T03:33:31
CC-MAIN-2020-10
1581875141653.66
[array(['https://s3-us-west-2.amazonaws.com/dirigible-images/trinity-prod.png', 'Prod'], dtype=object) array(['https://s3-us-west-2.amazonaws.com/dirigible-images/trinity-local.png', 'Local'], dtype=object) array(['https://s3-us-west-2.amazonaws.com/dirigible-images/trinity-test.png', 'Test'], dtype=object) ]
docs.rackspace.com
#[derive(Serialize, Deserialize, Debug, DefaultJson, Clone)]
pub struct Post {
    content: String,
    date_created: String,
}

pub fn definition() -> ValidatingEntryType {
    entry!(
        name: "post",
        description: "a short social media style sharing of content",
        sharing: Sharing::Public,
        validation_package: || {
            hdk::ValidationPackageDefinition::ChainFull
        },
        validation: |_validation_data: hdk::EntryValidationData<Post>| {
            Ok(())
        },
        links: [
            to!(
                "post",
                link_type: "comments",
                validation_package: || {
                    hdk::ValidationPackageDefinition::ChainFull
                },
                validation: |_validation_data: hdk::LinkValidationData| {
                    Ok(())
                }
            )
        ]
    )
}
https://docs.rs/hdk/0.0.43-alpha3/hdk/macro.entry.html
2020-02-17T04:13:41
CC-MAIN-2020-10
1581875141653.66
[]
docs.rs
Private Dependencies

Dependency managers like Bundler, Yarn, and Go's module system allow specifying dependencies from private Git repositories. This makes it easier for teams to share code without requiring separate package hosting. Authentication typically happens over SSH. It's possible to manage SSH keys using Semaphore's secrets to authenticate to private Git repositories. This article walks you through the process.

Create the SSH key

You'll need to generate an SSH key and associate it directly with the project or a user who has access to that project. First, generate a new public/private key pair on your local machine:

ssh-keygen -t rsa -f id_rsa_semaphoreci

Add the SSH key

Next, connect the SSH key to the project or user. GitHub Deploy Keys are the easiest way to grant access to a single project. Another solution is to create a dedicated "ci" user, grant the "ci" user access to the relevant projects, and add the key to the user. Regardless of which you use, paste the contents of id_rsa_semaphoreci.pub into the relevant SSH key configuration on GitHub.

Create the secret

Now GitHub is configured with the public key. The next step is to configure your Semaphore pipeline to use the private key. We'll use secret files for this. Use the sidebar in the web UI or the sem CLI to create a new secret from the existing private key in id_rsa_semaphoreci on your local machine:

sem create secret private-repo --file id_rsa_semaphoreci:/home/semaphore/.ssh/private-repo

This will create the file ~/.ssh/private-repo in your Semaphore jobs. Note: on macOS the home directory is /Users/semaphore.

Use the secret in your pipeline

The last step is to add the private-repo secret to your Semaphore pipeline. This will make the private key file available for use with ssh-add. Here's an example:

blocks:
  - name: "Test"
    task:
      secrets:
        # Mount the secret:
        - name: private-repo
      prologue:
        commands:
          # Correct permissions since they are too open by default:
          - chmod 0600 ~/.ssh/*
          # Add the key to the ssh agent:
          - ssh-add ~/.ssh/*
          - checkout
          # Now bundler/yarn/etc are able to pull private dependencies:
          - bundle install
      jobs:
        - name: Test
          commands:
            - rake test

That's all there is to it. You can use the same approach to add more deploy keys to the private-repo secret to cover more projects and reuse the secret across other projects.
https://docs.semaphoreci.com/use-cases/using-private-dependencies/
2020-02-17T04:57:58
CC-MAIN-2020-10
1581875141653.66
[]
docs.semaphoreci.com
Set up a search workflow action In this example, we will be using...!
https://docs.splunk.com/Documentation/Splunk/7.0.10/Knowledge/Setupasearchworkflowaction
2020-02-17T05:09:45
CC-MAIN-2020-10
1581875141653.66
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
If you want to set up zanata-cli manually (without 0install):
- Navigate to zanata-cli on Maven Central.
- Download either dist.zip or dist.tar.gz.
- Extract the contents of the archive to your location of choice.
- Create a symbolic link to the zanata-cli script in the bin directory of the extracted archive, e.g. from the archive directory, run sudo ln -s --relative bin/zanata-cli /usr/local/bin/zanata-cli.
- (optional) You can also enable tab-autocomplete for the client if you use bash as your terminal shell. This can be done by copying or linking the zanata-cli-completion script from the bin directory to /etc/bash_completion.d/, e.g. ln -s --relative bin/zanata-cli-completion /etc/bash_completion.d/zanata-cli-completion.

Nightly Builds

If you like to live dangerously, the client nightly release is available. This may have newer features, but is not guaranteed to be stable. The latest nightly build is available as an archive that can be installed manually. To install the latest nightly build:
- Open Client nightly builds.
- Open the directory showing the highest version number.
- Download either of the distributable archives (ending with -dist.zip or -dist.tar.gz).
- Install as per the manual installation instructions above.
http://docs.zanata.org/en/release/client/installation/manual-installation/
2020-02-17T04:58:04
CC-MAIN-2020-10
1581875141653.66
[]
docs.zanata.org
Migrating to a Direct Account Note These instructions are tailored to Heroku users, but the information also holds true for Manifold users. Migrating from a Heroku Account to a Direct Account requires minor configuration changes in the Heroku application. This will require redeployment of the application. How It Works: - Sign up for an account at Bonsai.io. Please make sure to add your billing information. - Change your application to use ELASTICSEARCH_URL environment variable rather than BONSAI_URL. - Then configure your application in Heroku with the new ELASTICSEARCH_URL shown below: heroku config:set ELASTICSEARCH_URL=$(heroku config:get BONSAI_URL) When you uninstall the Bonsai add-on, Heroku will remove the BONSAI_URL configuration setting. By redeploying your application to use this different environment variable now, you can remove any downtime for your application in later steps. - Email [email protected] and include: A) the email address associated with your new Bonsai account and B) the cluster(s) that you want migrated over. - We’ll perform the migration. Your cluster URLs and all your data will remain intact. You will be billed at the monthly rate once that migration is complete. We’ll let you know once this step is done. - Once we have confirmed that the migration is complete, remove the Bonsai addon(s) from your Heroku app so you’re not being billed twice! Uninstalling the Bonsai add-on at this step in Elasticsearch will remove the BONSAI_URL. - You can migrate the rest of your application at your convenience. Any cluster we have migrated will now belong to your Bonsai.io account and can be managed there. Please let us know if your application is not functioning as expected. That’s it! Migrations are zero-downtime and take only a few minutes once our support team takes the ticket.
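As a footnote to step 2 above, the application-side change is typically a one-liner. In a Ruby app using the elasticsearch gem it might look like the following sketch; the initializer file name and the fallback to BONSAI_URL are illustrative assumptions, not Bonsai requirements:

# config/initializers/elasticsearch.rb
require 'elasticsearch'

# Prefer the new variable, fall back to the old one during the transition.
url = ENV['ELASTICSEARCH_URL'] || ENV['BONSAI_URL']
ES_CLIENT = Elasticsearch::Client.new(url: url)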
https://docs.bonsai.io/article/191-migrating-to-a-direct-account
2020-02-17T04:47:13
CC-MAIN-2020-10
1581875141653.66
[]
docs.bonsai.io
An Intern's Guide to Windows MultiPoint Server 2011 Hey all! As you can probably tell from my Live ID, I am not Dean Paron or James Duffus. My name is Livi Erickson, and I am a Program Manager intern on the WMS team. Since this is my first time writing a blog post for the TechNet blogs, I thought I'd give a brief introduction: I am a second semester junior majoring in Computer Science (with a minor in math) at Virginia Tech in Blacksburg, Virginia. I've lived in Virginia my entire life, up until last summer when I spent my summer vacation as an Explorer Intern on the MultiPoint team here on the West Coast. My experience last summer was so amazing, I came back as a "full time" intern this summer, and for the past 4 and a half weeks have been doing everything from writing specs to blog posts and managing the official WMS Twitter account (@msmultipoint). I'm writing this particular blog post in regards to a PowerPoint presentation that I threw together about how to set up Windows MultiPoint server. It's a very basic set up, but as someone who has now had to set up 4 or 5 test machines over the past year, I think it might be useful to see if you are considering testing out your own MultiPoint deployment. The planning and deployment guides get into a lot of great detail, but can be slightly intimidating, so attched to this post is a picture-heavy slideshow on how to go through the steps of setting up a very basic MultiPoint deployment. The hardest part is installing the display drivers, but they can normally be found online as long as you know what kind of graphics cards are in your system. We're planning on recording a step by step setup video in the very near future, but until then, you all get to enjoy my PowerPoint with pictures taken from my camera phone. (So glamorous is the life of an intern. :) ) ~Livi Interns_Guide_To_Multipoint.pptx
https://docs.microsoft.com/en-us/archive/blogs/multipointserver/an-interns-guide-to-windows-multipoint-server-2011
2020-02-17T05:14:08
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
TeamFoundationServer.ClientSettingsDirectory Property Gets or sets the directory that contains the client settings files. Namespace: Microsoft.TeamFoundation.Client Assembly: Microsoft.TeamFoundation.Client (in Microsoft.TeamFoundation.Client.dll) Syntax 'Declaration Public Shared Property ClientSettingsDirectory As String public static string ClientSettingsDirectory { get; set; } public: static property String^ ClientSettingsDirectory { String^ get (); void set (String^ value); } static member ClientSettingsDirectory : string with get, set static function get ClientSettingsDirectory () : String static function set ClientSettingsDirectory (value : String) Property Value Type: System.String The directory that contains the client settings files. Remarks May throw an exception if the current identity has never logged in, such as in an impersonation scenario. The returned string will look similar to the following: C:\Documents and Settings\username\Local Settings\Application Data\Microsoft\Team Foundation\3.0 .NET Framework Security - Full trust for the immediate caller. This member cannot be used by partially trusted code. For more information, see Using Libraries from Partially Trusted Code. See Also Reference TeamFoundationServer Class Microsoft.TeamFoundation.Client Namespace
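For orientation, a minimal usage sketch (assuming a console project referencing Microsoft.TeamFoundation.Client.dll):

using System;
using Microsoft.TeamFoundation.Client;

class Program
{
    static void Main()
    {
        // Prints a path similar to:
        // C:\Documents and Settings\username\Local Settings\Application Data\Microsoft\Team Foundation\3.0
        Console.WriteLine(TeamFoundationServer.ClientSettingsDirectory);
    }
}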
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2013/bb144674%28v%3Dvs.120%29
2020-02-17T04:15:13
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Installing MySQL for PCF Warning: MySQL for PCF v1.10 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version. This topic explains how to install MySQL for Pivotal Cloud Foundry (PCF). Plan your Deployment Network Layout MySQL for PCF supports deployment to multiple availability zones (AZs) on vSphere only. On other infrastructures, specify only one AZ. To optimize uptime, deploy a load balancer in front of the SQL Proxy nodes. Configure the load balancer to route client connections to all proxy IPs, and configure the MySQL service to give bound applications a hostname or IP address that resolves to the load balancer. This eliminates the first proxy instance as a single point of failure. See Configure a Load Balancer below for load balancer configuration recommendations. If you deploy the MySQL service on a different network than Pivotal Application Service (PAS) or Elastic Runtime, configure the firewall rules as follows to allow traffic between PAS (or Elastic Runtime) and the MySQL service. Configure a Load Balancer For high availability, Pivotal recommends using a load balancer in front of the proxies: Configure your load balancer for failover-only mode. Failover-only mode sends all traffic to one proxy instance at a time, and redirects to the other proxy only if the first proxy fails. This behavior prevents deadlocks when different proxies send queries to update the same database row. This can happen during brief server node failures, when the active server node changes. Amazon ELB does not support this mode; see AWS Route 53 for the alternative configuration. Make your idle time out long enough to not interrupt long-running queries. When queries take a long time, the load balancer can time out and interrupt the query. For example, AWS’s Elastic Load Balancer has a default idle timeout of 60 seconds, so if a query takes longer than this duration then the MySQL connection will be severed and an error will be returned. Configure a healthcheck or monitor, using TCP against port 1936. This defaults to TCP port 1936, to maintain backwards compatibility with previous releases. This port is not configurable. Unauthenticated healthchecks against port 3306 may cause the service to become unavailable and require manual intervention to fix. Configure the load balancer to route traffic for TCP port 3306 to the IPs of all proxy instances on TCP port 3306. After you install MySQL for PCF, you assign IPs to the proxy instances in Ops Manager. Add a Load Balancer to an Existing Installation If you initially deploy MySQL for PCF v1.5.0 without a load balancer and without proxy IPs configured, you can set up a load balancer later to remove the proxy as a single point of failure. When adding a load balancer to an existing installation, you need to: - Rebind your apps Ops Manager, configure DNS for your load balancer to point to the IPs that were dynamically assigned to your proxies. You can find these IPs in the Status tab. Configuration of proxy IPs after the product is deployed with dynamically assigned IPs is not well supported. Create an Application Security Group Create an Application Security Group (ASG) for MySQL for PCF. See Creating Application Security Groups for MySQL for instructions. The ASG allows smoke tests to run when you install the MySQL for PCF service and allows apps to access the service after it is installed. 
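As a sketch of what that ASG looks like in practice (the CIDR below is a placeholder for the network your MySQL service nodes are deployed on; see Creating Application Security Groups for MySQL for the exact rules to use):

# mysql-asg.json -- example rules file; 10.0.16.0/24 is a placeholder CIDR
[
  {
    "protocol": "tcp",
    "destination": "10.0.16.0/24",
    "ports": "3306"
  }
]

# Create and bind the group with the cf CLI:
cf create-security-group p-mysql mysql-asg.json
cf bind-running-security-group p-mysql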
Note: The service is not installable or usable until an ASG is in place.

Install the MySQL for PCF Tile

Configure the settings for your MySQL for PCF service, including its service plans. See Configuring MySQL for PCF for instructions. Click Apply Changes to deploy the service.
https://docs.pivotal.io/p-mysql/1-10/installing.html
2020-02-17T04:09:21
CC-MAIN-2020-10
1581875141653.66
[]
docs.pivotal.io
DEPRECATION WARNING: This documentation is not using the current rendering mechanism and will be deleted by December 31st, 2020. The extension maintainer should switch to the new system. Details on how to use the rendering mechanism can be found here.

CSS-File

Own CSS file

If you want to adapt the CSS in depth, please use your own CSS file. Include your CSS file as in the snippet below.

page.includeCSS {
  myOwnCSS = fileadmin/myown.css
}
https://docs.typo3.org/typo3cms/extensions/startcustomer/stable/03_Integrators/02_Setup/03_CSS/Index.html
2020-02-17T04:52:24
CC-MAIN-2020-10
1581875141653.66
[]
docs.typo3.org
Note When using .NET Framework version 1.1 or earlier (which does not support the SqlBulkCopy class), you can execute the SQL Server Transact-SQL BULK INSERT statement using the SqlCommand object. In This Section Bulk Copy Example Setup Describes the tables used in the bulk copy examples and provides SQL scripts for creating the tables in the AdventureWorks database. Single Bulk Copy Operations Describes how to do a single bulk copy of data into an instance of SQL Server using the SqlBulkCopy class, and how to perform the bulk copy operation using Transact-SQL statements and the SqlCommand class. Multiple Bulk Copy Operations Describes how to do multiple bulk copy operations of data into an instance of SQL Server using the SqlBulkCopy class. Transaction and Bulk Copy Operations Describes how to perform a bulk copy operation within a transaction, including how to commit or rollback the transaction.
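For orientation, a single bulk copy operation with the SqlBulkCopy class typically follows the pattern sketched below; the connection strings, source query, and destination table name are placeholders:

using System.Data.SqlClient;

string sourceConnectionString = "<source connection string>";
string destinationConnectionString = "<destination connection string>";

using (SqlConnection source = new SqlConnection(sourceConnectionString))
using (SqlConnection destination = new SqlConnection(destinationConnectionString))
{
    source.Open();
    destination.Open();

    SqlCommand command = new SqlCommand("SELECT * FROM Production.Product;", source);

    using (SqlDataReader reader = command.ExecuteReader())
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destination))
    {
        bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";
        bulkCopy.WriteToServer(reader);  // streams rows from the reader into the destination table
    }
}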
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/bulk-copy-operations-in-sql-server
2020-02-17T04:14:40
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Point Data¶ The pyvista.PolyData object adds additional functionality to the vtk.vtkPolyData object, to include direct array access through NumPy, one line plotting, and other mesh functions. PolyData Creation¶ See Create PolyData for an example on creating a pyvista.PolyData object from NumPy arrays. Empty Object¶ A polydata object can be initialized with: import pyvista grid = pyvista.PolyData(). Initialize from a File¶ Both binary and ASCII .ply, .stl, and .vtk files can be read using PyVista. For example, the PyVista package contains example meshes and these can be loaded with: import pyvista from pyvista import examples # Load mesh mesh = pyvista.PolyData(examples.planefile) This mesh can then be written to a vtk file using: mesh.save('plane.vtk') These meshes are identical. import numpy as np mesh_from_vtk = pyvista.PolyData('plane.vtk') print(np.allclose(mesh_from_vtk.points, mesh.points)) Mesh Manipulation and Plotting¶ Meshes can be directly manipulated using NumPy or with the built-in translation and rotation routines. This example loads two meshes and moves, scales, and copies them. import pyvista from pyvista import examples # load and shrink airplane airplane = pyvista.PolyData(examples.planefile) airplane.points /= 10 # shrink by 10x # rotate and translate ant so it is on the plane ant = pyvista.PolyData(examples.antfile) ant.rotate_x(90) ant.translate([90, 60, 15]) # Make a copy and add another ant ant_copy = ant.copy() ant_copy.translate([30, 0, -10]) To plot more than one mesh a plotting class must be created to manage the plotting. The following code creates the class and plots the meshes with various colors. # Create plotting object plotter = pyvista.Plotter() plotter.add_mesh(ant, 'r') plotter.add_mesh(ant_copy, 'b') # Add airplane mesh and make the color equal to the Y position. Add a # scalar bar associated with this mesh plane_scalars = airplane.points[:, 1] plotter.add_mesh(airplane, scalars=plane_scalars, stitle='Airplane Y\nLocation') # Add annotation text plotter.add_text('Ants and Plane Example') plotter.show(screenshot='AntsAndPlane.png') pyvista.PolyData Grid Class Methods¶ The following is a description of the methods available to a pyvista.PolyData object. It inherits all methods from the original vtk object, vtk.vtkPolyData. Attributes Methods - class pyvista. PolyData(*args, **kwargs)¶ Bases: vtkCommonDataModelPython.vtkPolyData, pyvista.core.pointset.PointSet, pyvista.core.filters.PolyDataFilters Extend the functionality of a vtk.vtkPolyData object. Can be initialized in several ways: Create an empty mesh Initialize from a vtk.vtkPolyData Using vertices Using vertices and faces From a file Examples >>> import pyvista >>> from pyvista import examples >>> import vtk >>> import numpy as np >>> surf = pyvista.PolyData() # Create an empty mesh >>> # Initialize from a vtk.vtkPolyData object >>> vtkobj = vtk.vtkPolyData() >>> surf = pyvista.PolyData(vtkobj) >>> # initialize from just vertices >>> vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 0.5, 0], [0, 0.5, 0],]) >>> surf = pyvista.PolyData(vertices) >>> # initialize from vertices and faces >>> faces = np.hstack([[3, 0, 1, 2], [3, 0, 3, 2]]).astype(np.int8) >>> surf = pyvista.PolyData(vertices, faces) >>> # initialize from a filename >>> surf = pyvista.PolyData(examples.antfile) - property area¶ Return the mesh surface area. - Returns area – Total area of the mesh. - Return type float - property obbTree¶ Return the obbTree of the polydata. An ob. save(filename, binary=True)¶ Write a surface mesh to disk. 
Written file may be an ASCII or binary ply, stl, or vtk mesh file. If ply or stl format is chosen, the face normals are computed in place to ensure the mesh is properly saved. - Parameters filename (str) – Filename of mesh to be written. File type is inferred from the extension of the filename unless overridden with ftype. Can be one of the following types (.ply, .stl, .vtk) binary (bool, optional) – Writes the file as binary when True and ASCII when False. Notes - Binary files write much faster than ASCII and have a smaller file size.
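A short example tying the attributes and the save() behavior described above together (output values depend on the mesh, of course):

import pyvista
from pyvista import examples

mesh = pyvista.PolyData(examples.planefile)
print(mesh.area)                     # total surface area of the mesh
mesh.save('plane.stl', binary=True)  # face normals are computed before writing STL/PLY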
https://docs.pyvista.org/core/points.html
2020-02-17T03:33:06
CC-MAIN-2020-10
1581875141653.66
[array(['../_images/AntsAndPlane.png', '../_images/AntsAndPlane.png'], dtype=object) ]
docs.pyvista.org
Learn which keys to use to access and escape the HTML field toolbar.
https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/administer/accessibility-508-compliance/concept/keyboard-accessibility.html
2018-06-18T03:37:17
CC-MAIN-2018-26
1529267860041.64
[]
docs.servicenow.com
3.1.3 release notes

What's new in 3.1.3

Bug Fixes
- data migration
- Fix getting request in _show_placeholder_for_page on Django 1.8
- Fix template inheritance order
- Fix xframe options inheritance order
- Fix placeholder inheritance order
- Fix language chooser template
http://django-cms.readthedocs.io/en/release-3.4.x/upgrade/3.1.3.html
2018-06-18T03:51:36
CC-MAIN-2018-26
1529267860041.64
[]
django-cms.readthedocs.io
Explicit roles You. External users must obtain, at minimum, the snc_external role. The snc_external role indicates that the user is external to your organization and should not have any access to resources unless explicitly allowed through ACLs for the snc_external role or additional roles. By default, users with the snc_external role are unable to access non-record type resources as well, such as processors and UI pages. Do not mark the snc_internal role as elevated. Otherwise, internal users cannot access the instance. Note: You can use encryption contexts with the snc_internal and snc_external roles. However, adding encryption contexts to more detailed roles is recommended. Explicit Roles plugin The Explicit Roles (com.glide.explicit_roles) plugin provides the snc_external and snc_internal roles. When this plugin is activated: All existing users are automatically assigned the snc_internal role. This role does not change existing access levels or system behavior. Rather, it provides a category to differentiate internal users from external users. All internal users maintain the same level of access as before the plugin was activated. Newly created users are automatically assigned the snc_internal role when they first attempt to log in to the instance, unless they have been explicitly assigned the snc_external role. You can add the snc_external role to a new user before they first log in to the instance to provide external user rights. Note: The snc_internal and snc_external roles can be added or removed at any time to change user rights. All existing ACLs that do not have a role requirement are automatically assigned the snc_internal role. Because both existing ACLs and roles are assigned the snc_internal role, existing access levels do not change. Newly created ACLs that do not have a role requirement are automatically assigned the snc_internal role. This role assignment does not apply to a newly created ACL with a role assigned. Effective with Istanbul Patch 11: For all existing Processor [sys_processor] records or newly created Processor [sys_processor] records with Type=script, the snc_internal role is automatically added to the Roles field if the field is empty. External users must obtain, at minimum, the snc_external role to access the instance. This role is automatically assigned to external Customer Service Portal contacts. If the Customer Service Portal is not activated, this role must be manually granted to external users. Access to records is granted through ACLs. Content Management System site access is also affected. CMS is set up with Sites (content_site), Pages (content_page), and other resources. Some of the sites may have the Login page configured. If CMS sites do not have the Login page configured, the public role is automatically added to the Read Roles field on Pages (content_page) if the field is empty. If CMS sites have the Login page configured, the snc_internal role is automatically added to the Read Roles field on Pages (content_page) if the field is empty. Note: This plugin also requires the Contextual Security plugin. Providing access to external users You can grant external users access to tables by creating a set of ACLs for the table. Another approach you can take is to give all external users access to all tables, and then restrict access to specific tables. You can do this by adding the snc_external role to the * ACL that is of Type ui_page. The hasRoles() method The hasRoles() method is still available, but is deprecated in the Geneva release. 
Use the hasRole(role name) method instead. If you do use the hasRoles() method, note these changes: This method automatically excludes the default snc_internal role when it checks for roles. This means that if a user has only the snc_internal role, the hasRoles() method still returns false. If the user has the snc_external role, the method returns false because the instance considers external users to be without a role.
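For example, a server-side check with the recommended method might look like the following sketch (the role names are the ones introduced above; where you place the check depends on your own scripts):

// e.g. in a business rule or script include
if (gs.hasRole('snc_external')) {
    // external user: only resources explicitly allowed for snc_external
} else {
    // internal user (snc_internal) or a user with additional roles
}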
https://docs.servicenow.com/bundle/istanbul-customer-service-management/page/administer/contextual-security/concept/c_InternalAndExternalUsers.html
2018-06-18T03:29:31
CC-MAIN-2018-26
1529267860041.64
[]
docs.servicenow.com
Experience Builder

Experience Builder allows you to track your website visitors' behavior and build content experiences tailored to those visitors' interests. Instead of having to use a separate application with its own interface and API to target content to your visitors, Experience Builder integrates tightly with Drupal, allowing you to leverage the Drupal skills and knowledge you've developed from maintaining your website. Regardless of the CMS in use by your website, you can install and use Experience Builder to meet your website personalization needs.
https://docs.acquia.com/en/stable/lift/exp-builder/
2018-06-18T03:37:18
CC-MAIN-2018-26
1529267860041.64
[]
docs.acquia.com
Define iteration paths (aka sprints) VSTS | TFS 2018 | TFS 2017 | TFS 2015 | TFS 2013 Newly created team projects contain a single, root area that corresponds to the team project name. Team projects typically specify a predefined set of iterations to help you get started tracking your work. All you need to do is specify the dates. You add iteration paths under this root. To understand how the system uses area paths, see About area and iteration paths. Prerequisites - You must be a member of a team project. If you don't have a team project yet, create one in VSTS. If you haven't been added as a team member, get added now. - You must be a member of a team project. If you don't have a team project yet, create one in an on-premises TFS. If you haven't been added as a team member, get added now. To create or modify areas or iterations, you must either be a member of the Project Administrators group, or your Create and order child nodes, Delete this node, and Edit this node permissions must be set to Allow for the area or iteration node that you want to modify. If you aren't a project administrator, get added as one or have someone provide you with explicit permissions to Edit project-level information. For naming restrictions on area and iteration paths, see About areas and iterations, Naming restrictions. Open the administration context for the team project From the web portal, open the admin page for the team project. You define both areas and iterations from the Work hub of the team project admin context. From the user context, you open the admin context by clicking the gear icon. The tabs and pages available differ depending on which admin context you access. From the web portal for the team project context, click the gear icon.. If you're currently working from a team context, then hover over the and choose Project settings. - Open the Work hub. Add iterations and set iteration dates From the Iterations page, you can add and select the iterations that will be active for your team. You add iterations in the same way you add areas. For more information about working within a sprint cadence, see Schedule sprints. Open the Work, Iterations page for the team project context. For Scrum-based team projects, you'll see these set of sprints. If you need to select another team project, go to the Overview page for the collection (click the DefaultCollection link). Schedule the start and end dates for each sprint your teams will use. Click Set dates or choose to edit the iteration from the actions menu for the sprint. When you're finished, you'll have a set of sprints scheduled - like this: Your next step is to choose the sprints each team will use. Open the Iterations tab for the team project context. For Scrum-based team projects, you'll see these set of sprints. You can change the name, location within the tree hierarchy, or set dates for any sprint. Simply open it (double-click or press Enter key) and specify the info you want. Schedule the start and end dates for those sprints you plan to use. After you set the start and end dates for one iteration, the calendar tool automatically attempts to set the next set of dates, based on the same iteration length you specified for the first. For example, if you set a three week sprint for Sprint 1, then when you select the start date for Sprint 2, the calendar tool automatically determines the start and end dates based on the next three weeks. You can accept or change these dates. To add another sprint, select New child and name it what you want. 
Here, we call it Sprint 7. Your next step is to select the sprints each team will use. Rename or delete an iteration When you rename an iteration, or move the node within the tree hierarchy, the system will automatically update the work items and queries that reference the existing path or paths. When you delete an iteration node, the system automatically updates the existing work items with the node that you enter at the deletion prompt. Chart progress by area or iteration You can quickly generate queries to view the progress based on an iteration. As an example, you can visualize progress of work items assigned to sprints as shown in the following stacked bar chart. Related articles As you can see, areas and iterations play a major role in supporting Agile tools and managing work items. You can learn more about working with these fields from these topics:
https://docs.microsoft.com/en-us/vsts/work/customize/set-iteration-paths-sprints?view=vsts
2018-06-18T04:08:19
CC-MAIN-2018-26
1529267860041.64
[array(['_img/alm_cw_stackedbarchart.png?view=vsts', 'Stacked bar chart by area'], dtype=object) ]
docs.microsoft.com
- Find the virtual machine in the vSphere Web Client inventory. - To find a virtual machine, select a datacenter, folder, cluster, resource pool, or host. - Click the Related Objects tab and click Virtual Machines. - Right-click the virtual machine and click Edit Settings. - In the Virtual Hardware tab, expand the CPU section. - Select a hyperthreading mode for this virtual machine from the HT Sharing drop-down menu. - Click OK.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.resmgmt.doc/GUID-9FC94B43-FF06-48BD-AD19-D39719625BB9.html
2018-06-18T04:15:51
CC-MAIN-2018-26
1529267860041.64
[]
docs.vmware.com
a Virtual Serial Port Concentrator (vSPC) URI to connect a serial port over the network. Prerequisites Familiarize yourself with the different media types that the port can access, vSPC connections, and any conditions that might apply. See vSphere Virtual Machine Administration. To connect a serial port over a network, add a Firewall rule set. See vSphere Virtual Machine Administration. Required privilege: Power off the virtual machine. Procedure - Click Virtual Machines in the VMware Host Client inventory. - Right-click a virtual machine in the list and select Edit settings from the pop-up menu. - On the Virtual Hardware tab, select Add other device and select Serial Port. The Serial Port appears in the hardware list. - In the hardware list, expand the serial port and select the type of media port to access. - (Optional) Deselect Connect at power on if you do not want the parallel port device to connect when the virtual machine powers on. - Click Save..
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.html.hostclient.doc/GUID-32E0ACD8-0311-48CA-A97F-7ECD1CE0AD71.html
2018-06-18T04:02:38
CC-MAIN-2018-26
1529267860041.64
[]
docs.vmware.com
Capacity planning and performance tuning are in a sense the same thing: choosing system components, placement, and parameters based on a given performance criteria. The difference is in when they are carried out. Capacity planning is part of planning; it is generally done before any of the system has been assembled. Tuning, on the other hand, is done after the initial architecture is a fact, when you have some feedback on the performance of the architecture. Perfect capacity planning would eliminate the need for performance tuning. However, since it is impossible to predict exactly how your system will be used, you will find that planning can reduce the need to tune but cannot eliminate it. With Vespa it is possible to do benchmarking on a few nodes to infer the overall performance of the chosen system. This article primarily describes static sizing - refer to Proton maintenance jobs to learn the dynamic behavior of the search core. The basic element in the Vespa search architecture is a content node. A content node holds a partition, an index, for a fraction of the entire data corpus. A content cluster consists of a collection of content nodes. Partitioning is then used to create linear scalability in data size and query rate. Vespa executes a query in two phases (Or more if you are using grouping features). During the first-phase, search/content nodes (proton) match and rank documents. The topmost documents are returned upwards to the TLD (Top Level Dispatcher) and the search Container for merging and potentially blending when multiple search clusters are involved. When the best ranking documents have been found, the second-phase is to fetch the summary data for those hits (snippets, bolding etc). The figure below illustrates how the query travels down to all the search/content nodes: On an idle system without feeding, we have the following formulas for the proton portion of the idle response time (Rproton,idle) and the response time on an idle system (Ridle). The average response time for a query on a loaded system can be approximately derived from the response time for the same query on a idle system and the system utilization using the following formulas: vespa-proton-binis running. Due to increased rate of CPU cache misses and TLB misses when the number of threads goes up, the above formula is just a rough approximation (ballpark estimate). To limit this issue, vespa-protonhas a cap on the number of threads performing query evaluation. By pushing the utilization to 100% during system benchmarking on a single node, we get an upper throughput measurement of when the node becomes overloaded. It is important to also measure behaviour under non-ideal circumstances to avoid getting too good results, since this could result in an undersized installation that breaks down when redistributing documents after some nodes have gone down or up while feeding data and serving queries. Thus a second measurement should be performed on a small content cluster (e.g. 4 nodes) with active feeding and one of the nodes being taken down during the benchmark, then later brought up again. The higher utilization a system has in production, the more fragile it is. If utilization is 80% then a relative traffic spike of 25% will cause an outage, and even a small relative traffic spike of 10% will double the response time, likely breaking the SLA. The situation is even worse if the retry rate for failed queries increases, either due to frontend logic or users. 
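A rough way to write the loaded-versus-idle relationship described above (an approximation consistent with the utilization discussion here, not necessarily the exact formula from the original reference material) is:

$$R_{\text{loaded}} \approx \frac{R_{\text{idle}}}{1 - \text{utilization}}$$

which is why an 80% utilized system is fragile: a 25% relative traffic increase pushes utilization to 100%, the denominator toward zero, and response times grow without bound.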
The target system utilization should be kept sufficiently low for the response times to be reasonable and within SLA even with some extra traffic occurring at peak hours. Each search/content node holds a partition of the index. Each search/content node queries its partition of the index upon a request from a dispatcher. A dispatcher is responsible for sending an incoming query in parallel to a set of search/content nodes so that all partitions are asked (full coverage). The results are then merged, and a complete result from the entire cluster is generated. The time to search the index is dependent on the number of nodes in the content cluster, since this determines the number of documents per node, which directly relates to DQC in the formulas above. The Vespa search kernel has performance characteristics where, within certain boundaries, the latency can be considered to scale linearly with the number of documents indexed on each node. See the graph below, where we have plotted the 99 percentile latency of a fixed set of queries executed against an increasing number of documents indexed. Fetching document summaries are normally a small cost compared to actually searching. Default configuration for this phase is focused on low resource usage. However if you want faster fetching of summaries you can enable the cache in the backend. Then you will use some more memory, but will get reduced latency/higher throughput. Enabling summary cache is done by tuning maxsize. Note that this is tuning setting affects all document types. I.e. if the cluster has 10 document types, the total is 10 caches each of size 1G amounting to 10G. Also note that in most cases you will not save any disk IO as the summary data are normally cached by the OS is the file system caches. But you will save CPU as the summaries by default are compressed in 65k chunks. With normal data each chunk is compressed down to 10-15k. Accessing 1 document will require decompressing 1 chunk. We will only cache the document you access. It is possible to improve latency of queries where the dynamic query cost is high. This portion of the cost can be split over multiple threads. This will reduce latency as long as you have free cpu cores available. By default this number is 1 as that gives the best resource usage measured as cpu/query. This can be overridden by a config override in services.xml. Remember to benchmark both throughput and latency when changing this. You control this setting globally by tuning persearch. This can be overridden to a lower value in the rank profiles. It is important to decide on a latency service level agreement (SLA) before sizing the Vespa search architecture for your application and query features. An SLA is often given as a percentiles at a certain traffic level, often peak traffic. There are other factors that will limit the amount of documents that can be indexed per node, such as the memory footprint, indexing latency and disk space. It is also very important to make sure that these numbers were obtained while keeping the query features fixed. Changing the way you query Vespa, for instance asking for more hits (document summaries) or changing the ranking expression, might have a severe impact on the latency and hence also the sizing. If we are going to size the application for 100M documents, we'll need 100M/20M = 5 nodes for the 400 ms case and 100M/10M = 10 nodes for the 200 ms case. 
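Both tuneables mentioned above (the summary cache maxsize and the per-search thread count) live under the content cluster's proton tuning section in services.xml. A sketch with placeholder values follows; check the proton tuning reference if the element nesting differs in your Vespa version:

<content id="mycluster" version="1.0">
  <engine>
    <proton>
      <tuning>
        <searchnode>
          <summary>
            <store>
              <cache>
                <maxsize>1073741824</maxsize> <!-- ~1G summary cache per document type -->
              </cache>
            </store>
          </summary>
          <requestthreads>
            <persearch>4</persearch> <!-- threads used to evaluate a single query -->
          </requestthreads>
        </searchnode>
      </tuning>
    </proton>
  </engine>
</content>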
An SLA per query class, and not as an overall requirement is a nice way to get an even better utilization of the system resources. For instance, Direct Display traffic might have a stricter SLA then costly navigational queries involving more features like grouping etc. Adding content nodes to a content cluster reduces the dynamic query cost per node (DQC) but does not reduce the static query cost (SQC). When SQC is no longer significantly less than DQC, the gain of adding more nodes is diminished, and the corresponding increase in dispatch cost (DQ) will eventually outweigh the reduction in DQC as the number of nodes in the content cluster becomes very large. Disk usage of a search/content node increases as the document volume increases: Note that maintenance jobs temporarily increases disk usage. index fusion is an example, where new index files are written, causing an increase in used disk space while running. Space used depends on configuration and data - headroom must include the temporal usage. The memory usage on a search/content node increases as the document volume increases (0 - 10M documents, same as the disk usage graph above): Do not let the virtual memory usage of proton scare you. It is important to monitor the resident memory usage. It is not uncommon to see proton use 22G of virtual memory but only 7G resident as seen above. If the node had 16G of physical memory, we could probably fit 20M documents without running the risk of heavy swap activity. Do note that in a default vespa setup there will be multiple other processes running along with the vespa-proton process that will also consume memory. The memory usage per document depends on the number of fields, the size of the document and how many of the fields are defined as attributes. When the static query cost (SQC) or the dispatch cost (DC) becomes significant, then scaling by replication should be considered. It is supported for content clusters, using hierarchical distribution. Take a look at QPS Scaling in an Indexed Content Cluster for more information on how to use hierarchical distribution. When data is replicated such that only a portion of the nodes are needed by the dispatcher to get full coverage, then both the average static query cost (SQC) and the dispatch cost (DC) are significantly reduced. But the feed cost (FC), disk space requirement, and maintenance cost is also significantly higher, compared to just using partitioning. Except under very special circumstances, getting sufficient redundancy is the only reason for scaling throughput by replication. When you have decided the latency SLA and benchmarked latencies versus document volume, one can start to look at how many groups that will be needed in the setup, given the estimated traffic (queries/s or QPS). Finding where the "hockey stick" or break point in the response times occurs is extremely important in order to make sure that the Vespa system deployed is well below this threshold and that it has capacity to absorb load increases over time as well as having sufficient capacity to sustain node outages during peak traffic. At some throughput level, some resource(s) in the system will be saturated and requests will be queued up causing latency to spike up exponentially since requests are arriving faster then they they are able to be served. The more queries executed past the saturation point, the longer an average request needs to be in the queue waiting to be served. 
This is demonstrated in the graph below for the 10M documents per node case. If we had chosen 20M documents per node, the breakdown point would be at approximately half of what is observed in the above graph (30 QPS). We should identify which resource is the bottleneck (container, search kernel, CPU, disk?). Potentially we can go through a set of optimizations to give the application a higher breakdown point (and lower latency), but for now we assume that 60 QPS is the best we can do with 10M docs per node.

It is given in the SLA that our application needs to support 100 QPS at peak traffic. If we had gone for more documents per node (20M, meeting the < 400 ms latency), a single group would be able to handle 30 QPS (double the document volume, double the latency, which gives half the supported QPS). We would then have needed 5 groups to support the estimated 100 QPS traffic (5 * 30 = 150; losing a single node would give us 120 QPS), in total 5 * 5 = 25 search nodes. As we can see, it is going to be more cost effective if we can relax our latency requirement and store more documents per node, so it is always important to try to optimize the number of documents that a node can hold in order to reduce the number of columns (the group arithmetic is recapped in the sketch at the end of this section).

A Vespa search/content node typically runs the search process, vespa-proton-bin. It is possible to configure multiple instances per physical node. The situation is a bit complex, since a physical node going down is equivalent to multiple logical nodes going down at the same time. By using hierarchical distribution, it is possible to tweak the distribution of bucket replicas in such a way that the system can still handle n - 1 physical nodes going down on a content cluster where redundancy has been set to n.

The search process has the following roles:

As we have seen in the previous sections, it is important to establish a latency SLA, and latency is a linear function of how many documents are indexed given a fixed set of query features. The question is still: how many documents can a node hold when we also start considering the other factors that limit it, such as memory, disk space, and indexing latency?

Attribute data and the memory index could potentially use a lot of memory, so it is important that we understand the memory footprint of our application. If a node starts running low on memory, a lot of bad things will happen. A standard search/content node configuration comes with 8-48G of RAM. Many Vespa applications do not need that amount of memory per node. Perhaps the latency SLA is so strict that CPU is the limitation, and maybe 8G is enough, since the document volume/latency trade-off restricts the number of docs we can place per node. The simplest way to determine the memory footprint on the search/content node is to feed a random sample of your document collection and view the stable persistent memory size of the process in a monitoring tool. Note, however, that the memory usage will increase during query load, as dictionaries and document summaries are memory mapped by default. Thus, you will usually see high virtual memory use, but lower resident memory use.

On a search/content node, the search process vespa-proton-bin is the main user of memory. The amount of memory used by the search process is dominated by the memory index and the attributes. New documents are indexed in the memory index, which enables sub-second indexing latency.
The memory index is later flushed to the disk index whenever the memory usage goes beyond a certain limit or a certain time interval has passed. The default settings should be good enough for most setups, but it can be necessary to adjust them depending on the application. The more memory you allow the memory index to use, the more documents can be kept in memory before they must be flushed to disk. This affects feeding performance, as the number of fusions between the memory and disk index is inversely proportional to the number of documents in the memory index. The various tunables for the memory index can be found in the reference doc.

Attribute data is the second main user of memory in the search process. Attributes usually have a fixed size per document, but multi-value attributes may have variable size per document, and string attributes consume some space for each unique value. See the attribute sizing guide for sizing of attribute data.

The core data structures powering Vespa search are built on inverted indexes. The number of I/O operations per query is given by the following equation:

$$\text{io-seeks}_{\text{node}} = 2n + \frac{h}{H}$$

where n is the number of query/filter terms, h is the number of document summary hits for display, and H is the total number of nodes in the search cluster. Note that this formula gives the worst case scenario and does not take the OS cache or the application cache into account. Observations:

Refer to the match-phase reference. Match-phase works by specifying an attribute that measures document quality in some way (popularity, click-through rate, pagerank, bid value, price, text quality). In addition, a max-hits value specifies how many hits are "more than enough" for your application. An estimate is then made after collecting a reasonable amount of hits for the query, and if the estimate is higher than the configured max-hits value, an extra limitation is added to the query, ensuring that only the highest quality documents can become hits. In effect this limits the documents actually searched to the highest quality documents, a subset of the full corpus, where the size of the subset is calculated in such a way that the query is estimated to give max-hits hits. Since some (low-quality) hits will already have been collected to do the estimation, the actual number of hits returned will usually be higher than max-hits. But since the distribution of documents isn't perfectly smooth, you sometimes risk getting fewer than the configured max-hits hits back.

Note that limiting hits in the match-phase also affects aggregation, grouping, and total-hit-count, since it actually makes the query produce fewer hits. Also note that it doesn't really make sense to use this feature together with a WAND operator that also limits your hits, since they both operate in the same manner and the interference between them could cause very unpredictable results.

The graph shows possible hits versus actual hits in a corpus with 100 000 documents, where max-hits is configured to 10 000. The corpus is a synthetic (slightly randomized) data set; in practice the graph will be less smooth:

There is a searchnode metric per rank-profile named limitedqueries that shows how many of your queries are actually affected by these settings; compare with the corresponding queries metric to measure the percentage. There are some very important things to consider before using match-phase.
In a normal search scenario latency is directly proportional to the number of hits the query matches: a query that matches few documents will have low latency and a query that matches many documents will have high latency. Match-phase has the opposite effect. This means that if you have queries that match few documents, match-phase might make these queries significantly slower; it might actually be faster to run the query without match-phase.

Example: Let's say you have a corpus with a document attribute named created_time. For all queries you want the newest content surfaced, so you enable match-phase on created_time. So far, so good: you get great latency and always get your top-k hits. The problem might come if you introduce a filter. If you have a filter saying you only want documents from the last day, then match-phase can become sub-optimal and in some cases much worse than not running with match-phase at all. By design Vespa will evaluate potential matches for a query in the order of their internal document id. This means it will start evaluating documents in the order they were indexed on the node, and for most use-cases that means the oldest documents first. Without a filter, every document is a potential match and match-phase will quickly figure out how it can optimize. With the filter, on the other hand, the algorithm needs to evaluate almost all of the corpus before it reaches potential matches (the 1 day old corpus), and because of the way the algorithm is implemented, it ends up doing a lot of unnecessary work and can have orders of magnitude higher latencies than running the query without the filter.

Another important thing to mention is that the reported total-hits will be different when doing queries with match-phase enabled. This is because match-phase works on an estimated "virtual" corpus, which might have much fewer hits than are actually in the full corpus.

If used correctly match-phase can be a life saver, but as you understand, it is not a straightforward fix-it-all silver bullet. Please test and measure your use of match-phase, and contact the Vespa team if your results are not what you expect.

In general, the container follows the classic "hockey stick" for latency when overloaded: a sharp increase in latency is followed by a cascade of effects which each contribute to a worsening of the situation. For search applications, the container has some heuristics to try to detect a breakdown state. First of all, if a node is extremely overloaded, nothing will run efficiently at all. If the container thinks this is the case, it will log a thread dump, starting with the following warning in the log:

A watchdog meant to run 10 times a second has not been invoked for 5 seconds. This usually means this machine is swapping or otherwise severely overloaded.

The shutdown watchdog may be deactivated by editing the bropt variable in the vespa-start-container-daemon script, or by adding the correct system property ( -Dvespa.freezedetector.disable=true) to the JVM startup arguments.

If a large proportion of search queries start timing out, and there is enough traffic that it is not obviously a pure testing scenario, the container will log the following warning message:

Too many queries timed out. Assuming container is in breakdown.

A search plug-in may inspect whether the container thinks the system is in breakdown, using the Execution.Context object in the Execution instance passed by the search method.
Please refer to the JavaDoc for com.yahoo.search.searchchain.Execution.Context.getBreakdown(). When breakdown is assumed, a certain amount of queries will start emitting detailed diagnostics in the log ( com.yahoo.search.searchchain.Execution.Context.getDetailedDiagnostics()). The messages will look something like this: 1304337308.016 myhost.mydomain.com 20344/17 container Container.com.yahoo.search.handler.SearchHandler warning Time use per searcher: com.yahoo.search.querytransform.NGramSearcher@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.search.querytransform.DefaultPositionSearcher@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.search.grouping.GroupingValidator@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.search.grouping.vespa.GroupingExecutor@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.querytransform.LiteralBoostSearcher@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.querytransform.CJKSearcher@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.querytransform.StemmingSearcher@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.querytransform.NormalizingSearcher@music(QueryProcessing(SEARCH: 1 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.searcher.ValidateSortingSearcher@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.cluster.ClusterSearcher@music(QueryProcessing(SEARCH: 0 ms), ResultProcessing()), com.yahoo.search.SlowSearcher@default(QueryProcessing(SEARCH: 12 ms, FILL: 0 ms), ResultProcessing(SEARCH: 18 ms)), com.yahoo.prelude.statistics.StatisticsSearcher@native(QueryProcessing(SEARCH: 1 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.search.grouping.GroupingQueryParser@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.searcher.JuniperSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.searcher.FieldCollapsingSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.querytransform.RecallSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.querytransform.PhrasingSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.semantics.SemanticSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.searcher.PosSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.querytransform.IndexCombinatorSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.grouping.legacy.GroupingSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.grouping.legacy.AggregatingSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.searcher.BlendingSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), com.yahoo.prelude.querytransform.VietnamesePhrasingSearcher@vespa(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 0 ms)), federation@native(QueryProcessing(SEARCH: 0 ms), ResultProcessing(SEARCH: 1 ms)). This message gives time spent for each searcher, in each search for each type of activity (basically searching versus filling hits). 
The data above are from a test where a SlowSearcher class was inserted to provoke extra timing information:

com.yahoo.search.SlowSearcher@default(QueryProcessing(SEARCH: 12 ms, FILL: 0 ms), ResultProcessing(SEARCH: 18 ms))

In other words, the class named SlowSearcher, in the search chain named default, spent 12 ms on query processing in the search phase, and less than a millisecond when invoked in the fill phase. No time was registered for its result processing in the fill phase, but it was measured to use 18 ms for result processing in the search phase.
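Tying the sizing example from earlier in this section together, here is a minimal Python sketch of the group arithmetic, using this example's measured numbers (60 QPS per group at 10M documents per node, 100 QPS peak traffic). The extra headroom group is an assumption made here so that capacity survives an outage; adjust it to your own redundancy policy:

import math

def groups_needed(peak_qps: float, qps_per_group: float) -> int:
    """Groups needed so that peak traffic is still covered if one group is lost."""
    return math.ceil(peak_qps / qps_per_group) + 1  # +1 group of headroom (assumption)

nodes_per_group = math.ceil(100_000_000 / 10_000_000)     # 100M docs at 10M docs/node -> 10 nodes
groups = groups_needed(peak_qps=100.0, qps_per_group=60.0)
print(groups, groups * nodes_per_group)                    # 3 groups -> 30 search nodes in total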
https://docs.vespa.ai/documentation/performance/sizing-search.html
2018-06-18T03:57:57
CC-MAIN-2018-26
1529267860041.64
[]
docs.vespa.ai
The body elements support the most common types of content authoring for topics: paragraphs, lists, phrases, figures, and other common types of exhibits in a document.

The <alt> element provides alternate text for an image. It is equivalent to the @alt attribute on the <image> element; since the @alt attribute is deprecated, use the <alt> element instead. The <alt> element can be more easily edited or translated than the @alt attribute.
The <cite> element is used when you need a bibliographic citation. It specifically identifies the title of the resource.
The <dd> element contains the description of a term in a definition list entry (<dlentry>).
The <desc> element contains the description of the current element.
The <ddhd> element provides an optional heading or title for a column of descriptions or definitions in a definition list (<dl>).
The <div> element is used to organize subsets of content into logical groups that are not intended to be or should not be contained as a topic.
The <dlentry> element groups a single entry in a definition list (<dl>). The <dlentry> element includes a term (<dt>) and one or more definitions or descriptions (<dd>) of that term.
The <dlhead> element contains optional headings for the term and description columns in a definition list (<dl>). The definition list heading might contain a heading for the column of terms (<dthd>) and a heading for the column of descriptions (<ddhd>).
The <dt> element contains a term in a definition list entry (<dlentry>).
The <dthd> element provides an optional heading for the column of terms in a definition list (<dl>).
The <example> element is a section that contains examples that illustrate or support the current topic.
The <fn> element is a footnote used to annotate text with notes that are inappropriate for inline inclusion. It is also used to indicate the source for facts or other material used in the text.
The <image> element is used to include artwork or images in a DITA topic.
The <keyword> element identifies a keyword or token, such as a single value from an enumerated list, the name of a command or parameter, product name, or a lookup key for a message.
The <longdescref> element supports a reference to a text description of the graphic or object. This element replaces the deprecated @longdescref attribute on <image> and <object> elements.
The <longquoteref> element provides a reference to the source of a long quote.
The <lq> element is used to provide extended content quoted from another source. Use the quote element <q> for short, inline quotations, and the long quote <lq> for quotations that are too long for inline use, following normal guidelines for quoting other sources. The @href and @keyref attributes are available to specify the source of the quotation. The <longquoteref> element is available for more complex references to the source of a quote.
The <note> element contains information that expands on or calls attention to a particular point. This information is typically differentiated from the main text.
The <object> element corresponds to the HTML <object> element, and attribute semantics derive from their HTML definitions. For example, the @type attribute differs from the @type attribute on many other DITA elements.
The <ol> element includes a list of items sorted by sequence or order of importance.
The <p> element is a single paragraph containing a single main idea.
The <pre> element contains text for which all line breaks and spaces are preserved. It is typically presented in a monospaced font. Do not use <pre> when a more semantically specific element is appropriate, such as <codeblock>.
The <q> element includes content quoted from another source. This element is used for short quotes that are displayed inline. Use the long quote element (<lq>) for quotations that should be set off from the surrounding text or that contain multiple paragraphs.
The <sl> element contains a simple list of items of short, phrase-like content, such as a list of materials in a kit or package.
The <term> element identifies words that might have or require extended definitions or explanations.
The <text> element associates no semantics with its content. It exists to serve as a container for text where a container is needed (for example, as a target for content references, or for use within restricted content models in specializations).
The <tm> element identifies a term or phrase that is trademarked. Trademarks include registered trademarks, service marks, slogans, and logos.
The <ul> element is a list of items in which the order of list items is not significant. List items are typically styled on output with a "bullet" character, depending on nesting level.
The <xref> element is used to provide an inline cross reference. It is commonly used to link to a different location within the current topic, a different topic, a specific location in another topic, or an external resource. The target of the cross-reference is specified using the @href or @keyref attributes.
http://docs.oasis-open.org/dita/dita/v1.3/errata01/os/complete/part3-all-inclusive/langRef/containers/body-elements.html
2018-06-18T03:56:16
CC-MAIN-2018-26
1529267860041.64
[]
docs.oasis-open.org
Step 3: Invoke the Lambda Function (AWS CLI) In this section, you invoke your Lambda function manually using the invoke AWS CLI command.

$ aws lambda invoke \
--invocation-type RequestResponse \
--function-name helloworld \
--region region \
--log-type Tail \
--payload '{"key1":"value1", "key2":"value2", "key3":"value3"}' \
--profile adminuser \
outputfile.txt

If you want, you can save the payload to a file (for example, input.txt) and provide the file name to the --payload parameter instead.

The preceding invoke command specifies RequestResponse as the invocation type, which returns a response immediately after the function executes. Alternatively, you can specify Event as the invocation type to invoke the function asynchronously.

By specifying the --log-type parameter, the command also requests the tail end of the log produced by the function. The log data in the response is base64-encoded, as shown in the following example response:

{ "LogResult": "base64-encoded-log", "StatusCode": 200 }

On Linux and Mac, you can use the base64 command to decode the log.

$ echo base64-encoded-log | base64 --decode

The following is a decoded version of an example log.

START RequestId: 16d25499-d89f-11e4-9e64-5d70fce44801
2015-04-01T18:44:12.323Z 16d25499-d89f-11e4-9e64-5d70fce44801 value1 = value1
2015-04-01T18:44:12.323Z 16d25499-d89f-11e4-9e64-5d70fce44801 value2 = value2
2015-04-01T18:44:12.323Z 16d25499-d89f-11e4-9e64-5d70fce44801 value3 = value3
2015-04-01T18:44:12.323Z 16d25499-d89f-11e4-9e64-5d70fce44801 result: "value1"
END RequestId: 16d25499-d89f-11e4-9e64-5d70fce44801
REPORT RequestId: 16d25499-d89f-11e4-9e64-5d70fce44801 Duration: 13.35 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 9 MB

For more information, see Invoke.

Because you invoked the function using the RequestResponse invocation type, the function executes and returns the object you passed to context.succeed() in real time when it is called. In this tutorial, you see the following text written to the outputfile.txt you specified in the CLI command:

"value1"

Note You are able to execute this function because you are using the same AWS account to create and invoke the Lambda function. However, if you want to grant cross-account permissions to another AWS account or grant permissions to an AWS service to execute the function, you must add permissions to the access permissions policy associated with the function. The Amazon S3 tutorial, which uses Amazon S3 as the event source (see Tutorial: Using AWS Lambda with Amazon S3), grants such permissions to Amazon S3 to invoke the function.

You can monitor the activity of your Lambda function in the AWS Lambda console. Sign in to the AWS Management Console and open the AWS Lambda console. The AWS Lambda console shows a graphical representation of some of the CloudWatch metrics in the CloudWatch Metrics at a glance section for your function. For each graph, you can also choose the logs link to view the CloudWatch logs directly.

Next Step

Step 4: Try More CLI Commands (AWS CLI)
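As a side note to the CLI walkthrough above, the same invocation can be made from Python with the AWS SDK (boto3). This is a minimal sketch, not part of the original tutorial; it assumes boto3 is installed and that the adminuser profile is configured locally with a default region:

import base64
import json

import boto3

# Reuse the adminuser credentials profile from the walkthrough above.
session = boto3.Session(profile_name="adminuser")
client = session.client("lambda")

response = client.invoke(
    FunctionName="helloworld",
    InvocationType="RequestResponse",   # use "Event" for asynchronous invocation
    LogType="Tail",                     # request the tail end of the execution log
    Payload=json.dumps({"key1": "value1", "key2": "value2", "key3": "value3"}),
)

print(response["Payload"].read().decode())               # the function's return value
print(base64.b64decode(response["LogResult"]).decode())  # decoded log tail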
https://docs.aws.amazon.com/lambda/latest/dg/with-userapp-walkthrough-custom-events-invoke.html
2018-06-18T04:03:13
CC-MAIN-2018-26
1529267860041.64
[]
docs.aws.amazon.com
Getting started Hasura helps you build applications quickly. Hasura provides APIs for common use cases (data, auth, filestore) and allows you to easily build your custom microservices too. This getting started guide will help you grok Hasura and will get you off the ground with your first running application in a few minutes. There are 3 core concepts that drive everything you do with Hasura: Hasura projects, Hasura clusters and deploying your project to the cluster. The hasura CLI tool is required to manage everything Hasura.

Concept #1: A hasura project Hasura breaks your entire application into a collection of microservices. There are a few ready-made microservices which give you instant backend APIs that you can use in your app directly, like data, auth and filestore. The data and auth microservices are backed by Postgres. As you can imagine, there are various configurations for these microservices and schema information for the data models you create for your application. Apart from this, your application will probably have custom microservices too, with source code and configuration specifications written by you.

Concept #2: A hasura cluster A Hasura cluster is a Kubernetes cluster on the cloud that can host any Hasura project. It has all the Hasura microservices running and the necessary tooling for you to deploy your Hasura project.

Concept #3: Deploying to the hasura cluster Once you have a Hasura cluster that is added to your Hasura project, running git push hasura master will deploy your Hasura project. Your configurations, database schema, and your microservices will all be deployed in a single go. Note that you can also deploy selected changes only, with some other advanced commands.

You're ready to start! Here's a list of your first steps:
- Install the hasura CLI
- Quickstart (clone a hasura project + create a free cluster): hasura quickstart hasura/hello-world. You can replace hasura/hello-world with any project from hasura.io/hub
- Deploy your project or the changes you make to the cluster: git push hasura master
- Open the API console to manage your data models and test your APIs: hasura api-console

The hello-world project contains a sample blog-engine schema and a custom nodejs microservice and will guide you through the basics of using a Hasura project.

Deep dive Alternatively, if you'd like to get a deep understanding of how Hasura works, head to The complete tutorial.
https://docs.hasura.io/0.15/manual/getting-started/index.html
2018-06-18T03:25:20
CC-MAIN-2018-26
1529267860041.64
[array(['../../_images/core-hasura-concepts.png', '../../_images/core-hasura-concepts.png'], dtype=object) array(['../../_images/hasura-project-structure.png', '../../_images/hasura-project-structure.png'], dtype=object) array(['../../_images/hasura-cluster.png', '../../_images/hasura-cluster.png'], dtype=object)]
docs.hasura.io
Monitoring and collecting performance data from your servers The Resources plugin gathers information about the server resource usage: CPU, Memory, Network, Disk, Filesystem, OS. How it works The plugin gathers OS and hardware metrics from the underlying system. No configuration is required. Below you can find the metrics list for Linux and Windows. Installation The plugin needs to be installed together with a CoScale agent, instructions on how to install the CoScale agent can be found here. If you want to monitor your applications inside Docker containers using CoScale, check out the instructions here.
http://docs.coscale.com/agent/plugins/resources/
2018-06-18T03:59:39
CC-MAIN-2018-26
1529267860041.64
[]
docs.coscale.com
An SCP item group is an overall entity for items with similar characteristics, such as product structure, lead time and so on. SCP item groups are used as an aggregated level for items that are to be used within supply chain optimization using M3 Supply Chain Planner. An SCP item group can consist of item numbers or SCP item groups on lower levels (range).
https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.scplanhs_16.x/c001671.html
2020-03-28T18:38:36
CC-MAIN-2020-16
1585370492125.18
[]
docs.infor.com
Built in global helper functions were removed by default in v2.1, though they are not deprecated and you can use them as you wish. Masonite works on getting rid of all those mundane tasks that developers either dread writing or dread writing over and over again. Because of this, Masonite has several helper functions that allow you to quickly write the code you want to write without worrying about imports or retrieving things from the Service Container. Many things inside the Service Container are simply retrieved using several functions that Masonite sets as builtin functions, which we call "Built in Helper Functions", a name you may see them referred to by. These functions do not require any imports and are simply available, similar to the print() function. These functions are all set inside the HelpersProvider Service Provider.

You can continue to use these helper functions as much as you like, but most developers use them to quickly mock things up and then come back to refactor later. It may make more sense if we take a peek at this Service Provider:

masonite.providers.HelpersProvider

class HelpersProvider(ServiceProvider):

    wsgi = False

    def register(self):
        pass

    def boot(self, view: View, request: Request):
        ''' Add helper functions to Masonite '''
        builtins.view = view.render
        builtins.request = request.helper
        builtins.auth = request.user
        builtins.container = self.app.helper
        builtins.env = os.getenv
        builtins.resolve = self.app.resolve

        view.share({'request': request.helper, 'auth': request.user})

Notice how we simply add builtin functions via this provider. The helpers listed below are "builtin" helpers, meaning they are global in the same way that the print() function is.

The Request class has a simple request() helper function.

def show(self):
    request().input('id')

is exactly the same as:

def show(self, request: Request):
    request.input('id')

Notice we didn't import anything at the top of the file, nor did we inject anything from the Service Container.

The view() function is just a shortcut to the View class.

def show(self):
    return view('template_name')

is exactly the same as:

def show(self, view: View):
    return view.render('template_name')

Instead of resolving the mail class you can use the mail helper:

def show(self):
    mail_helper().to(..)

is exactly the same as:

from masonite import Mail

def show(self, mail: Mail):
    mail.to(..)

The auth() function is a shortcut around getting the current user. We can retrieve the user like so:

def show(self):
    auth().id

is exactly the same as:

def show(self, request: Request):
    request.user().id

This will return None if there is no user, so in a real world application this may look something like:

def show(self):
    if auth():
        auth().id

This is because you can't call the .id attribute on None.

We can get the container by using the container() function:

def show(self):
    container().make('User')

is exactly the same as:

def show(self, request: Request):
    request.app().make('User')

We may need to get some environment variables inside our controller or other parts of our application. For this we can use the env() function.

def show(self):
    env('S3_SECRET', 'default')

is exactly the same as:

import os

def show(self):
    os.environ.get('S3_SECRET', 'default')

We can resolve anything from the container by using this resolve() function.

def some_function(request: Request):
    print(request)

def show(self):
    resolve(some_function)

is exactly the same as:

def some_function(request: Request):
    print(request)

def show(self, request: Request):
    request.app().resolve(some_function)

That's it!
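To see a few of these builtins side by side, here is a small hypothetical controller method that only uses the helpers documented above. The 'dashboard' template name and the 'q' input key are made up for illustration, and the context dictionary is passed the same way you would pass it to view.render:

class WelcomeController:

    def show(self):
        # Available without any imports once HelpersProvider has registered the builtins.
        user = auth()                          # the authenticated user, or None
        search = request().input('q')          # request input via the request helper
        secret = env('S3_SECRET', 'default')   # environment variable with a default

        return view('dashboard', {'user': user, 'search': search, 'secret': secret})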
These are simply functions that are added to Python's builtin functions.

Die and dump is a common way to debug objects in PHP and other programming languages. Laravel has the concept of dd(), which dies and dumps the object you need to inspect. dd() essentially adds a break point in your code which dumps the properties of an object to your browser. For example, we can die and dump the user we find:

from app.User import User

def show(self):
    dd(User.find(7))

If we then go to the browser and visit this URL as normal, we can see the object fully inspected. This kills the script wherever dd() is placed and throws an exception, but instead of showing the normal debugger it uses a custom exception handler and shows the inspection of the object instead:

There are several helper methods that require you to import them in order to use them. These helpers are not global like the previous helpers.

The config helper is used to get values in the config directory, for example the location in the config/storage.py file. This function can be used to retrieve values from any configuration file, but we will use the config/storage.py file as an example. With a config/storage.py file like this:

config/storage.py

DRIVERS = {
    's3': {
        'client': 'Hgd8s...',
        'secret': 'J8shk...',
        'location': {
            'west': '..',
            'east': '..'
        }
    }
}

We can get the value of the west key in the location inner dictionary like so:

from masonite.helpers import config

def show(self):
    west = config('storage.drivers.s3.location.west')

Instead of importing the dictionary itself:

from config import storage

def show(self):
    west = storage.DRIVERS['s3']['location']['west']

Note the use of the lowercase storage.drivers.s3 instead of storage.DRIVERS.s3. Either would work, because the config function is uppercase and lowercase insensitive.

This helper allows you to wrap any object and call attributes or methods on it even if they don't exist. If they exist then it returns the method; if they don't exist it returns None. Take this example where we would normally write:

def show(self):
    user = User.find(1)
    if user and user.id == 5:
        # do code...

We can now use this code snippet instead:

def show(self):
    if optional(User.find(1)).id == 5:
        # do code...

Compact is a really nice helper that allows you to stop writing those really repetitive dictionary statements in your controller methods. Take this for example:

def show(self, view: View):
    posts = Post.all()
    users = User.all()
    articles = Articles.all()
    return view.render('some.template', {'posts': posts, 'users': users, 'articles': articles})

Notice how our Python variables are exactly the same as what we want our variables to be in our template. With the compact function, now you can do:

from masonite.helpers import compact

def show(self, view: View):
    posts = Post.all()
    users = User.all()
    articles = Articles.all()
    return view.render('some.template', compact(posts, users, articles))

You can also pass in a dictionary, which will update accordingly:

from masonite.helpers import compact

def show(self, view: View):
    posts = Post.all()
    users = User.all()
    user_blogs = Blog.where('user_id', 1).get()
    return view.render('some.template', compact(posts, users, {'blogs': user_blogs}))

You can use the same Collection class that Orator uses when returning model collections. This can be used like so:

from masonite.helpers import collect

def show(self):
    collection = collect([1, 2, 3])
    if collection.first() == 1:
        # do action

You have access to all the methods on a normal collection object.
https://docs.masoniteproject.com/v/v2.2/the-basics/helper-functions
2020-03-28T18:57:03
CC-MAIN-2020-16
1585370492125.18
[]
docs.masoniteproject.com
Overview KNIME Server executes workflows that may try to access Kerberos-secured services such as Apache Hive™, Apache Impala™ and Apache Hadoop® HDFS™. This guide describes how to configure KNIME Server so that it can authenticate itself against Kerberos and then impersonate its own users towards Kerberos-secured cluster services.

What is user impersonation? With user impersonation, it does not matter whether a user runs a workflow in KNIME Analytics Platform or on KNIME Server. In both cases, all operations on the cluster will be performed as that particular user and the same permissions and authorization rules apply. This has the following advantages: Workflows that access a secured cluster run without modifications on KNIME Server. Authorization to access cluster resources (Hive tables, HDFS files, …) is administered with the usual mechanisms, e.g. Apache Sentry™ or Apache Ranger™.

How does user impersonation work? Let us assume that a user Jane runs a workflow on KNIME Server. The workflow is supposed to run a Hive query. The following sequence of events now takes place: She starts a workflow that connects to Hive. This workflow is now executed on KNIME Server, not Jane's machine. When the Hive Connector node in the workflow is executed, KNIME Server first checks for a TGT (ticket granting ticket) in its own ticket cache. If there is no TGT, it reads the krb5.conf configuration file, connects to the KDC and authenticates itself. Instead of Jane's credentials, it uses the credentials configured on KNIME Server, i.e. a service principal such as knimeserver/<host>@REALM and a keytab file. The TGT will be stored in an in-memory ticket cache. To make a JDBC connection to Hive, the Hive JDBC driver on KNIME Server still requires an ST (service ticket), which it now requests from the KDC. The ST is only valid for connections between KNIME Server and the Hive instance. Now, the Hive JDBC driver opens a connection to Hive and authenticates itself with the ST as knimeserver/<host>@REALM. Since the workflow was started by Jane, the JDBC driver tells Hive that all operations shall be performed as user Jane. Hive consults the Hadoop core-site.xml to verify that KNIME Server is indeed allowed to impersonate Jane. If not, it will return an error. Now, the workflow submits an SQL query via the JDBC connection. The query is executed on the cluster as user Jane. Hive checks whether user Jane has the necessary permissions to run the query. It employs its usual permission checking mechanism, e.g. Apache Sentry™ or Apache Ranger™. The query will succeed or fail, depending on whether Jane has the necessary permissions.

Prerequisites Setting up KNIME Server for Kerberos authentication and user impersonation has the following prerequisites.

For Kerberos: An existing Kerberos KDC such as MIT Kerberos or Microsoft ActiveDirectory. A service principal for KNIME Server; the recommended format is knimeserver/<host>@<REALM>, where <host> is the fully-qualified domain name of the machine where KNIME Server runs and <REALM> is the Kerberos realm. A keytab file for the KNIME Server service principal. A Kerberos client configuration file (krb5.conf); the recommended way to obtain this file is to copy the /etc/krb5.conf from a node in the cluster. Alternatively, you can create the file yourself (see Creating your own krb5.conf).

For the cluster: A Kerberos-secured cluster. An account with administrative privileges in the cluster management software that can configure and restart cluster services.
On Cloudera CDH this means a Cloudera Manager account, on Hortonworks HDP this means an Apache Ambari™ account.

For KNIME Server: An existing KNIME Server installation. An account with administrative privileges on the machine where KNIME Server is installed. This account needs to be able to edit the KNIME Server configuration files and restart KNIME Server.

Supported cluster services KNIME Server supports Kerberos authentication and user impersonation for connections to the following services: Apache Hive, Apache Impala, Apache Hadoop HDFS (including HttpFS), Apache Livy.

Setting up Kerberos authentication This section describes how to set up KNIME Server to authenticate itself against Kerberos.

Kerberos client configuration (krb5.conf) The KNIME Server Executor needs to read the krb5.conf file during Kerberos authentication. Please append the following line to the knime.ini file of the KNIME Server Executor:

-Djava.security.krb5.conf=<PATH>

For reference, Possible locations for krb5.conf describes the process by which KNIME Server Executor locates the krb5.conf file.

Kerberos principal and keytab file KNIME Server Executor needs credentials during Kerberos authentication. Please specify the service principal and the keytab file by adding the following lines to the KNIME Server preferences.epf:

/instance/org.knime.bigdata.commons/org.knime.bigdata.config.kerberos.user=<PRINCIPAL>
/instance/org.knime.bigdata.commons/org.knime.bigdata.config.kerberos.keytab.file=<PATH>

Setting up proprietary JDBC drivers (optional) The following subsections only apply if you choose to set up a proprietary JDBC driver for Hive/Impala. These drivers are vendor-specific. Please consult the subsection that applies to your Hadoop vendor.

Cloudera JDBC drivers (Hive and Impala) Download the newest JDBC drivers from the Cloudera website (login required). Once downloaded, extract the contained Cloudera_HiveJDBC41_<version>.zip (or Cloudera_ImpalaJDBC41_<version>.zip) file into an empty folder. Note that you need one folder for the Hive driver and another one for the Impala driver. Verify that the resulting two folders contain JAR files.

Hortonworks Hive JDBC drivers Download the newest Hive JDBC driver from the Hortonworks website. Locate the Hortonworks JDBC Driver for Apache Hive. Download the JDBC 4.1 driver. Once downloaded, extract the ZIP file into an empty folder. Verify that the folder now contains JAR files.

Setting up user impersonation This section describes how to set up both ends of user impersonation, which requires configuration on two sides: KNIME Server and the cluster.

User impersonation on KNIME Server By default, KNIME Server tries to impersonate its users on Kerberos-secured connections towards the following cluster services: HDFS (including HttpFS), Apache Livy, Apache Hive. Impersonation for HDFS and Apache Livy is done automatically and does not require any further setup. Connections to Apache Hive require further setup steps depending on the used JDBC driver, as described below.

To activate user impersonation for the embedded Apache Hive JDBC driver, add the following setting to the KNIME Server preferences.epf:

=hive.server2.proxy.user\={1}

This will append the hive.server2.proxy.user JDBC parameter to every JDBC connection made via the embedded JDBC driver. The placeholder {1} is automatically replaced by the login name of the KNIME Server user.
To activate user impersonation for proprietary Simba-based JDBC drivers, such as the ones provided by Cloudera and Hortonworks, add the following setting to the KNIME Server preferences.epf:

=DelegationUID\={1}

This will append the DelegationUID JDBC parameter to every JDBC connection made via the proprietary JDBC drivers. The placeholder {1} is automatically replaced by the login name of the KNIME Server user. Please check the driver documentation for the appropriate impersonation parameter if you are using any third party JDBC driver.

User impersonation on Apache Hadoop™ and Apache Hive™ Apache Hadoop™ and Apache Hive™ consult the core-site.xml file to determine whether KNIME Server is allowed to impersonate users. Please add the following settings to the Hadoop core-site.xml on your cluster:

<property>
  <name>hadoop.proxyuser.knimeserver.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.knimeserver.groups</name>
  <value>*</value>
</property>

User impersonation on Apache Impala™ Apache Impala™ requires a configuration setting to determine whether KNIME Server is allowed to impersonate users. The required steps are similar to Configuring Impala Delegation for Hue. In Cloudera Manager, navigate to Impala > Configuration > Impala Daemon Command Line Argument Advanced Configuration Snippet (Safety Valve) and add the following line:

-authorized_proxy_user_config='hue=*;knimeserver=*'

Then click Save and restart all Impala daemons. Please note: This will make hue and knimeserver the only services that can impersonate users in Impala. If other services should be allowed to do the same, they need to be included here as well. If you have created a service principal for KNIME Server other than knimeserver/<host>@<REALM>, then adjust the above setting accordingly.

Advanced configuration This section offers supplementary material to address more advanced setups.

Creating your own krb5.conf A minimal krb5.conf can look like this:

[libdefaults]
    default_realm = MYCOMPANY.COM

[realms]
    MYCOMPANY.COM = {
        kdc = kdc.mycompany.com
    }

Adjust these values as appropriate for your setup. Depending on your setup, more configuration settings may be necessary. The krb5.conf format is fully described as part of the MIT Kerberos Documentation.

Possible locations for krb5.conf Like any other Java program, KNIME Server Executor tries to read the krb5.conf file from a set of default locations. KNIME Executor tries the following locations in the given order: First it checks whether the java.security.krb5.conf system property is set. If so, it will try to read the file from the location specified in this system property. The system property can be set as described in Kerberos client configuration (krb5.conf). Otherwise, it will try to read the krb5.conf from the Java Runtime Environment of KNIME Server Executor: <knime-executor-installation>/plugins/org.knime.binary.jre.<version>/jre/lib/security/krb5.conf If this fails too, it will try the following operating system dependent locations: Windows: C:\Windows\krb5.ini, Linux: /etc/krb5.conf. You can place krb5.conf in any of the above locations. It is however recommended to set the java.security.krb5.conf system property in knime.ini.

Deactivating user impersonation on KNIME Server By default, KNIME Server tries to impersonate its users on Kerberos-secured connections.
To completely deactivate user impersonation (not recommended) on these connections, add the following line to the KNIME Server preferences.epf: /instance/org.knime.bigdata.commons/org.knime.bigdata.config.kerberos.impersonation.enabled=false Troubleshooting Activating Kerberos debug logging If Kerberos authentication fails, activating Kerberos debug logging may provide insight into why this is happening. To activate Kerberos debug logging, add the following line to the KNIME Server preferences.epf: /instance/org.knime.bigdata.commons/org.knime.bigdata.config.kerberos.logging.enabled=true Then, restart the KNIME Server Executor and run a workflow that accesses a Kerberos-secured service. The knime.log will then contain Kerberos debug messages. You can find the knime.log on the KNIME Server machine under: <server-repository>/runtime/runtime_knime-rmi_<suffix>/.metadata/knime/knime.log Depending on the configuration, <suffix> is either a number, a username, or a combination of both.
https://docs.knime.com/2019-12/bigdata_secured_cluster_connection_guide/index.html
2020-03-28T17:27:12
CC-MAIN-2020-16
1585370492125.18
[array(['./img/impersonation_overview.png', 'impersonation overview'], dtype=object) ]
docs.knime.com
Auditing¶ New in version 2.6. MongoDB Enterprise includes an auditing capability for mongod and mongos instances. The auditing facility allows administrators and users to track system activity for deployments with multiple users and applications. Enable and Configure Audit Output¶ The auditing facility can write audit events to the console, the syslog, a JSON file, or a BSON file. To enable auditing for MongoDB Enterprise, see Configure Auditing. For information on the audit log messages, see System Event Audit Messages. Audit Events and Filter¶ Once enabled, the auditing system can record the following operations: - schema (DDL), - replica set and sharded cluster, - authentication and authorization, and - CRUD operations (requires auditAuthorizationSuccessset to true). For details on audited actions, see Audit Event Actions, Details, and Results. With the auditing system, you can set up filters to restrict the events captured. To set up filters, see Configure Audit Filters. Audit Guarantee¶ The auditing system writes every audit event [1] to an in-memory buffer of audit events. MongoDB writes this buffer to disk periodically. For events collected from any single connection, the events have a total order: if MongoDB writes one event to disk, the system guarantees that it has written all prior events for that connection to disk. If an audit event entry corresponds to an operation that affects the durable state of the database, such as a modification to data, MongoDB will always write the audit event to disk before writing to the journal for that entry. That is, before adding an operation to the journal, MongoDB writes all audit events on the connection that triggered the operation, up to and including the entry for the operation. These auditing guarantees require that MongoDB run with journaling enabled. Warning.
https://docs.mongodb.com/v3.4/core/auditing/
2020-03-28T18:59:39
CC-MAIN-2020-16
1585370492125.18
[]
docs.mongodb.com
Can I Attach a File to Replies in Pitchbox? Yes! Click the blue text that says 'Attach File' next to the 'To' field when replying. After you click 'Attach File' you will see an Attachments box populate in the workspace. You can choose to upload a file or to drop files directly into this space.
https://docs.pitchbox.com/article/81-can-i-attach-a-file-to-replies-in-pitchbox
2020-03-28T16:50:45
CC-MAIN-2020-16
1585370492125.18
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598cc819042863033a1be2da/images/5aec88f50428631126f1b7be/file-LulUfZGrU2.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598cc819042863033a1be2da/images/5aec89992c7d3a3f981f441c/file-ggfCGzovOG.png', None], dtype=object) ]
docs.pitchbox.com
Opening Modes If a menu item in the RadMenu control contains any children, they are shown depending on a user action. There are two opening modes that recognize the following two actions: The child items are shown when the mouse is over the parent. The child items are shown when the user clicks on the parent.> and upon a mouse click:
https://docs.telerik.com/devtools/wpf/controls/radmenu/features/opening-modes
2020-03-28T18:57:25
CC-MAIN-2020-16
1585370492125.18
[array(['images/RadMenu_Features_Opening_Modes_01.png', None], dtype=object) array(['images/RadMenu_Features_Opening_Modes_02.png', None], dtype=object) array(['images/RadMenu_Features_Opening_Modes_03.png', None], dtype=object)]
docs.telerik.com
Previous work¶ A number of informal discussions have taken place between the FDA and the Nightscout contributors. The discussions have revealed a need to establish a framework for open source authors and FDA to work together in order to best protect and promote public safety. We seek the FDA’s guidance in finding similar frameworks from other regulated or life-critical areas such as defense, aviation, and automotive industries.
https://nightscout-fda-presubmission-01.readthedocs.io/en/latest/03-previous-work.html
2020-03-28T18:18:27
CC-MAIN-2020-16
1585370492125.18
[]
nightscout-fda-presubmission-01.readthedocs.io
CIFS and MAPI

Issue: A domain controller is removed from the network. However, the Citrix SD-WAN WANOP appliance is not able to leave the domain. Cause: This is a known issue with the appliance. Workaround: From the Windows Domain page, change the DNS to one through which you can resolve the intended domain. Next, use the Rejoin Domain option to make the Citrix SD-WAN WANOP appliance join that domain. Now try leaving the domain.

Issue: MAPI connections are not optimized and the following error message appears: non-default setting in outlook is not supported. Cause: This is a known issue with release 6.2.3 and earlier releases. Resolution: Upgrade the appliance to the latest release.

Issue: The appliance optimized the MAPI connections. However, the monitoring pages display the number of sent and received bytes as zero. Cause: This is a known issue with the appliance. Resolution: This is a benign issue and does not affect the functionality of the appliance. You can ignore it.

Issue: Unable to establish secure peering between Citrix SD-WAN WANOP appliances. Cause: Secure peering with the partner appliance is not properly configured. Resolution: Do the following: Verify that you have uploaded an appropriate combination of CA and server certificates to the appliance. Navigate to the Citrix SD-WAN WANOP > Configuration > SSL Settings > Secure Partners page. In the Partner Security section, under Certificate Verification, select the None - allow all requests option to make sure that the certificate never expires. Verify that the appliance can establish secure peering with the partner appliance. Verify that the Listen On section has an entry for the IP address of the intended Citrix SD-WAN WANOP appliance.

Issue: When connecting to an Exchange cluster, Outlook users with optimized connections are occasionally bypassed or prompted for logon credentials. Cause: MAPI optimization requires that each node in the Exchange cluster be associated with the exchangeMDB service principal name (SPN). Over time, as you need more capacity, you add additional nodes to the cluster. However, sometimes the configuration task might not be completed, leaving some nodes in the cluster without SPN settings. This issue is most prevalent in Exchange clusters with Exchange Server 2003 or Exchange Server 2007. Resolution: Do the following on each Exchange server in the setup: Access the domain controller. Open the command prompt. Run the following commands:

setspn -A exchangeMDB/Exchange1 Exchange1
setspn -A exchangeMDB/Exchange1.example.com Exchange1

Issue: When attempting to connect to Outlook, the Trying to connect message is displayed and then the connection is terminated. Cause: The client-side Citrix SD-WAN WANOP appliance has blacklist entries that do not exist on the server-side appliance. Resolution: Remove the blacklist entries from both appliances, or (recommended) upgrade the software of the appliances to release 6.2.5 or later.

Issue: The appliance fails to join the domain even after passing the pre domain checks. Cause: This is a known issue. Resolution: Do the following: Access the appliance by using an SSH utility. Log on to the appliance by using the root credentials. Run the following command:

/opt/likewise/bin/domainjoin-cli join <Domain_Name> administrator

Issue: The LdapError error message appears when you add a delegate user to the Citrix SD-WAN WANOP appliance.
Resolution: Do one of the following: On the Citrix SD-WAN WANOP appliance’s DNS server, verify that a reverse lookup zone is configured for every domain-controller IP address. Verify that the system clock of the client machine is synchronized with the system clock of the Active Directory server. When using Kerberos, these clocks must be synchronized. Update the delegate user on the Windows Domain page by providing the password for the delegate user once again. Issue: The Time skew error message appears when you add a delegate user to the Citrix SD-WAN WANOP appliance. Resolution: Verify that the appliance is joined to the domain. If not, join the appliance to the domain. This synchronizes the appliance time with the domain-server time and resolves the issue. Issue: The Client is temporarily excluded for acceleration. Last Error (Kerberos error.) error message appears when you add a delegate user to the Citrix SD-WAN WANOP appliance. Cause: The delegate user is configured for the Use Kerberos only authentication. Resolution: Verify that, on the domain controller, the delegate user’s authentication setting is Use any authentication protocol. Issue: The Delegate user not ready error message appears when you add a delegate user to the Citrix SD-WAN WANOP appliance. Resolution: If the message appears only on the client-side appliance, ignore it. However, if the message is displayed on the server-side appliance, run the delegate user precheck tool, available on the Windows Domain page, and then configure the delegate user on the server-side appliance. Issue: The Last Error (The Server is not delegated for Kerberos authentication. Please add delegate user, check list for services and server allowed for delegation.) UR:4 error message appears when you add a delegate user to the Citrix SD-WAN WANOP appliance. Resolution: Verify that the delegate user is correctly configured on the domain controller and that you have added appropriate services to the domain controller. Issue: The appliance is not able to join the domain. Resolution: Run the domain precheck tool, available on the Windows Domain page, and resolve the issues, if any. If the domain precheck tool does not report any issues, contact Citrix Technical Support for further assistance in resolving the issue.
https://docs.citrix.com/en-us/citrix-sd-wan-wanop/11/troubleshooting/cifs-and-mapi.html
2020-03-28T18:58:15
CC-MAIN-2020-16
1585370492125.18
[]
docs.citrix.com