_id | partition | text | language | title
---|---|---|---|---|
d301 | train | The trick was to put the start-of-string symbol '^' before the value being searched for and the end-of-string symbol '$' after it. Without these two anchors, the regex search will always return nothing.
Fixed portion:
var table = null;
$(document).ready(function(){
table = $('#searchTable').DataTable( {
"sPaginationType": "full_numbers",
"iDisplayLength": 5
} );
table.columns(0).search('^0$',true,false).draw();
} );
A: In case somebody else faces this problem: in my case the problem was related to searching for string characters that have special meaning in a regular expression (e.g. "AC (JLHA2) - GB/T1179-2008" will give nothing even if the data exists in the table).
I was able to fix this by using $.fn.dataTable.util.escapeRegex() to escape all special characters.
Here is the fix:
var table = null;
$(document).ready(function(){
table = $('#searchTable').DataTable( {
"sPaginationType": "full_numbers",
"iDisplayLength": 5
} );
// Escape the expression so we can perform a regex match
var val = $.fn.dataTable.util.escapeRegex('AC (JLHA2) - GB/T1179-2008');
table.columns(0).search(val ? '^' + val + '$' : '', true, false).draw();
} ); | unknown | |
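Both fixes above (anchoring with '^'/'$', and escaping regex metacharacters before an exact-match search) are easy to sanity-check outside DataTables. A small sketch using Python's re module, with the sample value taken from the answer:

```python
import re

# Anchoring: '^0$' matches the value "0" exactly, while an unanchored
# '0' would also match "10", "105", and so on.
assert re.search(r'^0$', '0')
assert not re.search(r'^0$', '10')
assert re.search(r'0', '105')

# Escaping: '(' and ')' are metacharacters, so the raw value used as a
# pattern finds nothing; escaping it first restores a literal match.
needle = 'AC (JLHA2) - GB/T1179-2008'
assert not re.search('^' + needle + '$', needle)
escaped = re.escape(needle)
assert re.search('^' + escaped + '$', needle)
```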
d302 | train | Try Regex: (?P<as_path>(?:\d{4,10}\s){1,20})\s+(?P<peer_addr>\d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3}).*\((?P<peer_rid>\d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3})\)\s+.*localpref\s(?P<local_pref>\d+),\s(?P<attribs>[\S]+,(?: [\S]+,?)*){0,4}
Demo
Regex in the question had a capturing group (Group 2) for (\d{4,10}\s). It is now changed to a non-capturing group: (?:\d{4,10}\s)
A: See regex in use here.
(?P<as_path>(?:\d{4,10}\s){1,20})\s+(?P<peer_addr>\d{0,3}(?:\.\d{0,3}){3}).*\((?P<peer_rid>\d{0,3}(?:\.\d{0,3}){3})\)\s+.*localpref\s(?P<local_pref>\d+),\s+(?P<attribs>\S+(?:,\s+\S+){2})
*You were getting group 2 because your as_path group contained a group. I changed that to a non-capturing group.
*I changed attribs to \S+(?:,\s+\S+){2}. This will match any non-space character one or more times \S+, followed by the following exactly twice: ,\s+\S+ (the comma character, followed by the space character one or more times, followed by any non-space character one or more times).
*I changed peer_addr and peer_rid to \d{0,3}(?:\.\d{0,3}){3} instead of \d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3}. This is a preference, but shortens the expression.
Without that last modification, you can use the following regex (it performs slightly better anyway, as seen here):
(?P<as_path>(?:\d{4,10}\s){1,20})\s+(?P<peer_addr>\d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3}).*\((?P<peer_rid>\d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3})\)\s+.*localpref\s(?P<local_pref>\d+),\s+(?P<attribs>\S+(?:,\s+\S+){2})
You can also improve the performance by using more specific tokens as the following suggests (notice I also added the x modifier to make it more legible) and as seen here:
(?P<as_path>\d{4,10}(?:\s\d{4,10}){0,19})\s+
(?P<peer_addr>\d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3})[^)]*
\((?P<peer_rid>\d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3})\)\s+
.*localpref\s(?P<local_pref>\d+),\s+
(?P<attribs>\w+(?:,\s+\w+){2})
A: You get that separate group because you are repeating a capturing group, where the last iteration will be the captured value, in this case 88945. You could make it non-capturing instead: (?:
For the second part you could use an alternation to exactly match one of the options (?:valid|external|best)
Your pattern might look like:
(?P<as_path>(?:\d{4,10}\s){1,20})\s+(?P<peer_addr>\d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3}).*\((?P<peer_rid>\d{0,3}\.\d{0,3}\.\d{0,3}\.\d{0,3})\)\s+.*localpref\s(?P<local_pref>\d+),\s(?P<attribs>(?:valid|external|best)(?:,\s{0,4}(?:valid|external|best))+)
regex101 demo | unknown | |
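The capturing-group point in these answers (a stray group 2 holding only the last repetition, e.g. 88945) can be checked with Python's re module, which uses the same (?P<name>...) syntax; the input below is a made-up AS path:

```python
import re

# With a plain group inside as_path, the repeated inner group is also
# numbered, and it captures only its last iteration.
capturing = re.compile(r'(?P<as_path>(\d{4,10}\s){1,20})')
m = capturing.search('12345 67890 88945 ')
assert m.group('as_path') == '12345 67890 88945 '
assert m.group(2) == '88945 '        # the stray group

# Making the inner group non-capturing removes the extra group entirely.
non_capturing = re.compile(r'(?P<as_path>(?:\d{4,10}\s){1,20})')
assert non_capturing.groups == 1
assert non_capturing.search('12345 67890 88945 ').group('as_path') == '12345 67890 88945 '
```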
d303 | train | Modern PCs use floating point numbers to calculate non-integral values.
These come in two standardized variants: float and double, where the latter is twice the size of the former.
Matlab, by default uses (complex) doubles for all its calculations.
You can force it to use float (or, as Matlab calls it, single) by specifying the type:
a = single([20, 25.0540913632159, 16.2750000000000, 3.08852992798468]);
This should use half the memory, and you lose some precision that may or may not be important in your application. Make sure the optimization is worth it before doing this, as execution speed may even be slower (due to builtin functions only operating on double, hence requiring two extra conversions). | unknown | |
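The single-vs-double trade-off described above can be illustrated in Python (whose floats are doubles, like Matlab's default) by round-tripping one of the values through 4-byte storage with the struct module:

```python
import struct

x = 25.0540913632159
# Pack into the 4-byte float format, then unpack back to a double.
single = struct.unpack('f', struct.pack('f', x))[0]
assert single != x                 # trailing digits are lost
assert abs(single - x) < 1e-5      # but the value stays close
# The storage really is half the size:
assert struct.calcsize('f') == 4 and struct.calcsize('d') == 8
```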
d304 | train | After inspection of Laravel's Http Request and Route classes, I found the route() and setAction() methods could be useful.
So I created a middleware to handle this:
<?php namespace App\Http\Middleware;
class Ajax {
public function handle($request, Closure $next)
{
// Looks for the value of request parameter called "ajax"
// to determine controller's method call
if ($request->ajax()) {
$routeAction = $request->route()->getAction();
$ajaxValue = studly_case($request->input("ajax"));
$routeAction['uses'] = str_replace("@index", "@ajax".$ajaxValue, $routeAction['uses']);
$routeAction['controller'] = str_replace("@index", "@ajax".$ajaxValue, $routeAction['controller']);
$request->route()->setAction($routeAction);
}
return $next($request);
}
}
Now my route looks like:
Route::any('some/page/', ['as' => 'some-page', 'middleware'=>'ajax', 'uses' => 'SomePageController@index']);
And correctly hits my controller methods (without disturbing Laravel's normal flow):
<?php namespace App\Http\Controllers;
class SomePageController extends Controller {
public function index()
{
return view('some.page.index');
}
public function ajaxMyAction(Requests\SomeFormRequest $request){
die('Do my action here!');
}
public function ajaxMyOtherAction(Requests\SomeFormRequest $request){
die('Do my other action here!');
}
...
I think this is a fairly clean solution.
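Hypothetical sketch of the middleware's dispatch idea in Python: an "ajax" request parameter is studly-cased and spliced into the action name, so "@index" becomes "@ajaxMyAction". The helper names here are invented for illustration:

```python
# Turn "my_action" into "MyAction", mimicking Laravel's studly_case().
def studly_case(s):
    return ''.join(part.capitalize() for part in s.replace('-', ' ').replace('_', ' ').split())

# Rewrite the default "@index" action when an ajax parameter is present.
def resolve_action(default, ajax_param=None):
    if ajax_param is None:
        return default
    return default.replace('@index', '@ajax' + studly_case(ajax_param))

assert resolve_action('SomePageController@index') == 'SomePageController@index'
assert resolve_action('SomePageController@index', 'my_action') == 'SomePageController@ajaxMyAction'
```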
A: You can't make this dispatch in the routing layer if you keep the same URL. You have two options:
*Use different routes for your AJAX calls. For example, you can prefix all your ajax calls with /api. This is a common way:
Route::group(['prefix' => 'api'], function()
{
Route::get('items', function()
{
//
});
});
*If the only difference is your response format, you can use a condition in your controller. Laravel provides methods for that, for example:
public function index()
{
$items = ...;
if (Request::ajax()) {
return Response::json($items);
} else {
return View::make('items.index');
}
}
You can read this http://laravel.com/api/5.0/Illuminate/Http/Request.html#method_ajax and this http://laravel.com/docs/5.0/routing#route-groups if you want more details. | unknown | |
d305 | train | Your var xmlhttp; declaration is outside of switchText's scope, so there xmlhttp is undefined and throws an error.
Try this
<html>
<head>
<script type="text/javascript">
var xmlhttp;
function loadXMLDoc()
{
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
}
function switchText()
{
loadXMLDoc();
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","ajax_info.txt",true);
xmlhttp.send();
}
</script>
</head>
<body>
<div id="myDiv"><h2>Let AJAX change this text</h2></div>
<button type="button" onclick="switchText()">Change Content</button>
</body>
</html>
A: I think the issue you have is that you have not validated your code, even down to whether you have matching curly braces or not (hint: you do not!).
Moving the open and send commands back into the first function and removing the extra curly brace should work.
The below should work:
<html>
<head>
<script type="text/javascript">
var xmlhttp;
function loadXMLDoc()
{
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.open("GET","ajax_info.txt",true);
xmlhttp.send();
}
function switchText()
{
loadXMLDoc();
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
}
}
}
</script>
</head>
<body>
<div id="myDiv"><h2>Let AJAX change this text</h2></div>
<button type="button" onclick="switchText()">Change Content</button>
</body>
</html>
hope that helps
Olly
A: I think the problem is with the following line in the ajax_object.html file:
if (xmlhttp.readyState==4 && xmlhttp.status==200)
If you run the file with the above line and look at 'Show Page Source',
it will be apparent that the 'Request & Response' header has its
Status and Code set to nothing.
So, delete the line and you will get:
<!DOCTYPE html>
<html>
<head>
<title></title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<!--script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js">
</script-->
<script>
function loadXMLDoc()
{
var xmlhttp;
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function(){
document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
}
xmlhttp.open("GET","ajax_info.txt",true);
xmlhttp.send();
}
</script>
</head>
<body>
<div id="myDiv"><h2>Let AJAX change this text</h2></div>
<button type="button" onclick="loadXMLDoc()">Change Content</button>
</body>
</html>
If you run this code it will output the ajax_info.txt file. | unknown | |
d306 | train | From your images it seems that you don't set the top constraint of the top view to the top safeAreaLayoutGuide; instead you set it to the superview.
Also, you can't set the top of the button to the safe area, as it only applies to direct subviews of the main VC's view, not to nested subviews. | unknown | |
d307 | train | yum -y remove php* to remove all php packages, then you can install the 5.6 ones.
A: Subscribing to the IUS Community Project Repository
cd ~
curl 'https://setup.ius.io/' -o setup-ius.sh
Run the script:
sudo bash setup-ius.sh
Upgrading mod_php with Apache
This section describes the upgrade process for a system using Apache as the web server and mod_php to execute PHP code. If, instead, you are running Nginx and PHP-FPM, skip ahead to the next section.
Begin by removing existing PHP packages. Press y and hit Enter to continue when prompted.
sudo yum remove php-cli mod_php php-common
Install the new PHP 7 packages from IUS. Again, press y and Enter when prompted.
sudo yum install mod_php70u php70u-cli php70u-mysqlnd
Finally, restart Apache to load the new version of mod_php:
sudo apachectl restart
You can check on the status of Apache, which is managed by the httpd systemd unit, using systemctl:
systemctl status httpd | unknown | |
d308 | train | Try
<tr ng-repeat="pelanggan in t.pelangganArr">
A: In your controller declare pelangganArr as $scope.pelangganArr.
Only scope variables are recognised by angular in the DOM and provide 2 way binding. | unknown | |
d309 | train | I suppose com.fasterxml.jackson's @JsonIgnore annotation should help.
public class Entity {
private String name;
@JsonIgnore
private String entityType;
@JsonIgnore
private Entity rootEntity;
}
A: In Json-lib you have a JsonConfig to specify the allowed fields:
JsonConfig jsonConfig=new JsonConfig();
jsonConfig.registerPropertyExclusion(Entity.class,"rootEntity");
jsonConfig.registerPropertyExclusion(Entity.class,"entityType");
JSON json = JSONSerializer.toJSON(objectToWrite,jsonConfig); | unknown | |
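The same exclude-fields-during-serialization idea (and the way it sidesteps the rootEntity cycle) can be sketched in Python's json module; the filtering helper below is invented for illustration:

```python
import json

class Entity:
    def __init__(self, name, entity_type, root_entity=None):
        self.name = name
        self.entityType = entity_type
        self.rootEntity = root_entity

# Fields to leave out, like @JsonIgnore / registerPropertyExclusion above.
EXCLUDED = {'entityType', 'rootEntity'}

def to_json(entity):
    data = {k: v for k, v in vars(entity).items() if k not in EXCLUDED}
    return json.dumps(data)

e = Entity('child', 'TypeA', root_entity=Entity('root', 'TypeB'))
assert to_json(e) == '{"name": "child"}'
```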
d310 | train | While the Dropbox API was designed with the intention that each user would link their own Dropbox account, in order to interact with their own files, it is technically possible to connect to just one account. We generally don't recommend doing so, for various technical and security reasons, but those won't apply if you're the only user anyway.
So, there are two ways to go about this:
1) Implement the normal app authorization flow as documented, and log in and authorize the app once per app installation. The SwiftyDropbox SDK will store the resulting access token for you, which you can programmatically re-use after that point each time using authorizedClient.
2) Manually retrieve an access token for your account and hard code it in to the app, using the DropboxClient constructor shown here under "Initialize with manually retrieved auth token". | unknown | |
d311 | train | Per the Dockerfile ARG docs,
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command, using the --build-arg <varname>=<value> flag.
in order to accept an argument as part of the build, we use --build-arg.
Dockerfile ENV docs:
The ENV instruction sets the environment variable <key> to the value <value>.
We also need to include an ENV statement because the CMD will be executed after the build is complete, and the ARG will not be available.
FROM busybox
ARG ENVIRONMENT
ENV ENVIRONMENT $ENVIRONMENT
CMD echo $ENVIRONMENT
will cause an environment variable to be set in the image, so that it is available during a docker run command.
docker build -t test --build-arg ENVIRONMENT=awesome_environment .
docker run -it test
This will echo awesome_environment.
A: Try changing your RUN command do this:
RUN npm run ng build --configuration=$ENVIRONMENT
This should work. Check here
Thanks. | unknown | |
d312 | train | You declare i within the for loop without initialising it. This is the reason you get 'weird values'. In order to rectify, you need to write:
for(int i=0; i<5; i++)
Hope this helps!
A: Just copy the bytes:
memcpy(newID, chID, 4);
A: One more note that it seems some people have overlooked here: if chId is length 4 then the loop bounds are i=0;i<4. That way you get i=0,1,2,3. (General programming tip, unroll loops in your head when possible. At least until you are satisfied that the program really is doing what you meant it to.)
NB: You're not copying chId into a string. You're copying it into a char array. That may seem like semantics, but "string" names a data type in C++ which is distinct from an array of characters. Got it right in the title, wrong in the question description. | unknown | |
d313 | train | No such thing is built in, because it doesn't need to be. Unlike destructuring, which is fairly involved, constructing maps is very simple in Clojure, and so fancy ways of doing it are left for ordinary libraries. For example, I long ago wrote flatland.useful.map/keyed, which mirrors the three modes of map destructuring:
(let [transforms {:keys keyword
:strs str
:syms identity}]
(defmacro keyed
"Create a map in which, for each symbol S in vars, (keyword S) is a
key mapping to the value of S in the current scope. If passed an optional
:strs or :syms first argument, use strings or symbols as the keys instead."
([vars] `(keyed :keys ~vars))
([key-type vars]
(let [transform (comp (partial list `quote)
(transforms key-type))]
(into {} (map (juxt transform identity) vars))))))
But if you only care about keywords, and don't demand a docstring, it could be much shorter:
(defmacro keyed [names]
(into {}
(for [n names]
[(keyword n) n])))
A: I find that I quite frequently want to either construct a map from individual values or destructure a map to retrieve individual values. In the Tupelo Library I have a handy pair of functions for this purpose that I use all the time:
(ns tst.demo.core
(:use demo.core tupelo.core tupelo.test))
(dotest
(let [m {:a 1 :b 2 :c 3}]
(with-map-vals m [a b c]
(spyx a)
(spyx b)
(spyx c)
(spyx (vals->map a b c)))))
with result
; destructure a map into values
a => 1
b => 2
c => 3
; construct a map
(vals->map a b c) => {:a 1, :b 2, :c 3}
P.S. Of course I know you can destructure with the :keys syntax, but it always seemed a bit non-intuitive to me. | unknown | |
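The same construct-a-map-from-names idea is easy to mimic outside Clojure; a hypothetical Python helper (names invented here):

```python
# Build a dict whose keys are variable names and whose values come
# from a given scope mapping (e.g. locals()).
def keyed(names, scope):
    return {n: scope[n] for n in names}

a, b, c = 1, 2, 3
assert keyed(['a', 'b', 'c'], locals()) == {'a': 1, 'b': 2, 'c': 3}
```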
d314 | train | As @Louwki said, you can use a Trait to do that. In my case I did something like this:
trait SaveToUpper
{
/**
* Default params that will be saved on lowercase
* @var array No Uppercase keys
*/
protected $no_uppercase = [
'password',
'username',
'email',
'remember_token',
'slug',
];
public function setAttribute($key, $value)
{
parent::setAttribute($key, $value);
if (is_string($value)) {
if($this->no_upper !== null){
if (!in_array($key, $this->no_uppercase)) {
if(!in_array($key, $this->no_upper)){
$this->attributes[$key] = trim(strtoupper($value));
}
}
}else{
if (!in_array($key, $this->no_uppercase)) {
$this->attributes[$key] = trim(strtoupper($value));
}
}
}
}
}
And in your model, you can specify other keys using the 'no_upper' variable. Like this:
// YouModel.php
protected $no_upper = ['your','keys','here'];
A: It was a lot easier than I thought. Here's the solution that is working for me using traits; posting it in case anyone else runs into something like this.
<?php
namespace App\Traits;
trait SaveToUpper
{
public function setAttribute($key, $value)
{
parent::setAttribute($key, $value);
if (is_string($value)) {
$this->attributes[$key] = trim(strtoupper($value));
}
}
}
UPDATE:
For getting values as upper case, you can add this to the trait or just add it as a function in the model:
public function __get($key)
{
if (is_string($this->getAttribute($key))) {
return strtoupper( $this->getAttribute($key) );
} else {
return $this->getAttribute($key);
}
} | unknown | |
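The trait's behaviour (uppercase string attributes on write, except a whitelist) can be sketched in Python with __setattr__; the class and field names here are invented:

```python
# Attributes that should keep their original case, as in the trait above.
NO_UPPERCASE = {'password', 'username', 'email', 'slug'}

class SaveToUpper:
    def __setattr__(self, key, value):
        # Uppercase and trim string values unless the key is whitelisted.
        if isinstance(value, str) and key not in NO_UPPERCASE:
            value = value.strip().upper()
        super().__setattr__(key, value)

class User(SaveToUpper):
    pass

u = User()
u.name = '  jack  '
u.email = 'jack@example.com'
assert u.name == 'JACK'
assert u.email == 'jack@example.com'   # whitelisted, left untouched
```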
d315 | train | So you want each group ordered internally, and the groups ordered by their latest value, right? Okay, I think we can do that...
var query = from action in actions
group action by action.Uid into g
orderby g.Max(action => action.Created) descending
select new { Uid = g.Key,
Actions = g.OrderByDescending(action => action.Created) };
foreach (var group in query)
{
Console.WriteLine("Uid: {0}", group.Uid);
foreach (var action in group.Actions)
{
Console.WriteLine(" {0}: {1}", action.Created, action.ActionId);
}
}
A: For the SQL, get the sort column in the SELECT statement
SELECT *, (SELECT MAX(created) FROM actions a2 where a.uid = a2.uid) AS MaxCreated
FROM actions a
ORDER BY MaxCreated desc, a.created desc
or
SELECT *
FROM actions a
ORDER BY (SELECT MAX(created) FROM actions a2 where a.uid = a2.uid) desc, a.created desc
(just fixed an error in the first query)
Here's my linq:
var actions = (from a in actions
orderby ((from a2 in actions
where a2.UserID == a.UserID
select a2.created).Max ()) descending, a.created descending
select a); | unknown | |
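The LINQ query's logic (groups ordered by their newest entry, and items ordered within each group) can be sketched in Python with made-up rows:

```python
from itertools import groupby

actions = [
    {'uid': 'u1', 'created': 1, 'action_id': 'a'},
    {'uid': 'u2', 'created': 5, 'action_id': 'b'},
    {'uid': 'u1', 'created': 3, 'action_id': 'c'},
]

# groupby needs its input sorted by the grouping key.
rows = sorted(actions, key=lambda a: a['uid'])
# Order each group internally, newest first.
groups = [(uid, sorted(g, key=lambda a: a['created'], reverse=True))
          for uid, g in groupby(rows, key=lambda a: a['uid'])]
# Order the groups by their newest (first) entry, newest first.
groups.sort(key=lambda kv: kv[1][0]['created'], reverse=True)

assert [uid for uid, _ in groups] == ['u2', 'u1']
assert [a['action_id'] for a in dict(groups)['u1']] == ['c', 'a']
```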
d316 | train | As it turns out, with the default OpenSSL (which is bundled with node, but if you've built your own, it is possible to configure different engines), the algorithm to generate random data is exactly the same for both randomBytes (RAND_bytes) and pseudoRandomBytes (RAND_pseudo_bytes).
The one and only difference between the two calls depends on the version of node you're using:
*
*In node v0.12 and prior, randomBytes returns an error if the entropy pool has not yet been seeded with enough data. pseudoRandomBytes will always return bytes, even if the entropy pool has not been properly seeded.
*In node v4 and later, randomBytes does not return until the entropy pool has enough data. This should take only a few milliseconds (unless the system has just booted).
Once the entropy pool has been seeded with enough data, it will never "run out," so there is absolutely no effective difference between randomBytes and pseudoRandomBytes once the entropy pool is full.
Because the exact same algorithm is used to generate randrom data, there is no difference in performance between the two calls (one-time entropy pool seeding notwithstanding).
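A loose Python analogue of the "full entropy pool" behaviour described above: os.urandom and secrets both draw from the OS CSPRNG, which never runs out once seeded:

```python
import os
import secrets

a = os.urandom(256)
b = secrets.token_bytes(256)
assert len(a) == 256 and len(b) == 256
assert a != b    # two independent 256-byte draws will not collide in practice
```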
A: Just a clarification, both have the same performance:
var crypto = require ("crypto")
var speedy = require ("speedy");
speedy.run ({
randomBytes: function (cb){
crypto.randomBytes (256, cb);
},
pseudoRandomBytes: function (cb){
crypto.pseudoRandomBytes (256, cb);
}
});
/*
File: t.js
Node v0.10.25
V8 v3.14.5.9
Speedy v0.1.1
Tests: 2
Timeout: 1000ms (1s 0ms)
Samples: 3
Total time per test: ~3000ms (3s 0ms)
Total time: ~6000ms (6s 0ms)
Higher is better (ops/sec)
randomBytes
58,836 ± 0.4%
pseudoRandomBytes
58,533 ± 0.8%
Elapsed time: 6318ms (6s 318ms)
*/
A: If it's anything like the standard PRNG implementations in other languages, it is probably either not seeded by default or it is seeded by a simple value, like a timestamp. Regardless, the seed is possibly very easily guessable. | unknown | |
d317 | train | EntityManager.executeQueryLocally is a synchronous function and you can use its result immediately. i.e.
var myEntities = myEntityManager.executeQueryLocally(myQuery);
Whereas EntityManager.executeQuery is an asynchronous function (even if the query has a 'using' call that specifies that this is a local query). So you need to call it like this:
var q2 = myQuery.using(breeze.FetchStrategy.FromLocalCache);
myEntityManager.executeQuery(q2).then(function(data) {
var myEntities = data.results;
});
The idea behind this is that with executeQuery you treat all queries in exactly the same fashion, i.e. asynchronously, regardless of whether they are actually asynchronous under the hood.
If you want to create an EntityManager that does not go to the server for metadata you can do the following:
var ds = new breeze.DataService({
serviceName: "none",
hasServerMetadata: false
});
var manager = new breeze.EntityManager({
dataService: ds
}); | unknown | |
d318 | train | I'm assuming that you have started with the following as it looks similar to the URL that you have created
http://docs.aws.amazon.com/AWSECommerceService/latest/GSG/SubmittingYourFirstRequest.html
Double check the timestamp as the page mentions it can't be more than 15 minutes old
But I'm afraid I don't know that API well enough to know how to get the signature set up correctly; have you considered using a library?
This seems like a nice example of what can be achieved with the library http://exeu.github.io/apai-io/ | unknown | |
d319 | train | You'll have discovered that your compiler doesn't like the line
REAL :: y(0:n+1) = (/(k, k=a,b,h)/)
Change it to
REAL :: y(0:n+1) = [(k, k=INT(a),INT(b),2)]
that is, make the lower and upper bounds for k into integers. I doubt that you will ever be able to measure any increase in efficiency, but this change might appeal to your notions of nice-looking and convenient code.
You might also want to tweak the way you initialise M. I'd have written your two loops as
M = 0.0
DO i = 1,n
M(i,i) = y(i)**2
END DO
Overall, though, your question is a bit vague so I'm not sure how satisfactory this answer will be. If not enough, clarify your question some more. | unknown | |
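The diagonal initialisation above translates directly to other languages; a plain-Python sketch with a stand-in y array (n = 4 is an assumption here):

```python
n = 4
y = [float(k) for k in range(n + 2)]     # stand-in for y(0:n+1)
M = [[0.0] * n for _ in range(n)]        # M = 0.0
for i in range(n):                       # DO i = 1,n
    M[i][i] = y[i + 1] ** 2              #   M(i,i) = y(i)**2  (1-based y)
assert M[0][0] == 1.0 and M[1][1] == 4.0
assert M[0][1] == 0.0                    # off-diagonal stays zero
```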
d320 | train | You can try using Text Component Line Number. | unknown | |
d321 | train | Try adding a comma after "userAccountResource", like this:
.factory("userAccountResource", //, here was missing
["$resource",
userAccountResource]); | unknown | |
d322 | train | You have to put the list of data in a scope.
try something like this:
public List<String> getMyList() {
myList.clear();
List<String> list = (List<String>) AdfFacesContext.getCurrentInstance().getProcessScope().get("myList");
if (list != null) {
for (String var : list) {
myList.add(var);
}
}
return myList;
}
You can also see this question and answer :
How to refresh table within a popup in dialog window in ADF Oracle 11gR1
A: The problem is that I define the setNameList() in managedbean and have to invoke setNameList() in another method in Class B.
I created a fresh managed bean to call this method, and the nameList in that instance is not the one bound to the page.
Solution:
In class B, get the right instance as:
ManagedBean managedBean = (ManagedBean)ADFUtil.evaluateEL("#{pageFlowScope.ManagedBean}");
The issue is gone. | unknown | |
d323 | train | In Python, do the following where alwayssep is the expression and line is the passed string:
line = re.sub(alwayssep, r' \g<0> ', line)
A: My Pythonizer converts that to this:
line = re.sub(re.compile(alwayssep),r' \g<0> ',line,count=0) | unknown | |
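To make the substitution concrete: \g<0> refers to the whole match, so every occurrence of the pattern gets padded with spaces. A runnable example with a hypothetical alwayssep pattern:

```python
import re

alwayssep = r'[,;]'            # hypothetical separator pattern
line = 'a,b;c'
# \g<0> is the entire match, so each separator is wrapped in spaces.
assert re.sub(alwayssep, r' \g<0> ', line) == 'a , b ; c'
```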
d324 | train | This document addresses issues on what you can, or rather, cannot do as Instance Administrators. You are permitted to change what you have access to the web UI and SMTP parameters using the APEX_INSTANCE_ADMIN package. | unknown | |
d325 | train | You can try something like this.
<table>
<thead>
<tr>
{% for key in groups.keys() %}
<th>{{ key|title }}</th>
{% endfor %}
</tr>
</thead>
<tbody>
<tr>
{% for key in groups.keys() %}
<td>{{ groups[key]}}</td>
{% endfor %}
</tr>
</tbody>
</table> | unknown | |
d326 | train | I diff'ed the project against an earlier version I'd kept that worked properly and came up with this fix:
In Xcode, under your Phonegap or Cordova project, select
Target -> Build Phases -> Compile Sources
Add your plugin into the list there, in this case CVLogger.m located in your file structure under "Plugins".
After this, the project compiles without error and the console plugin works. No need to reinstall and reconfigure your entire project for this... | unknown | |
d327 | train | Your superclass PointF is not serialisable. That means that the following applies:
To allow subtypes of non-serializable classes to be serialized, the subtype may assume responsibility for saving and restoring the state of the supertype's public, protected, and (if accessible) package fields. The subtype may assume this responsibility only if the class it extends has an accessible no-arg constructor to initialize the class's state. It is an error to declare a class Serializable if this is not the case. The error will be detected at runtime.
During deserialization, the fields of non-serializable classes will be initialized using the public or protected no-arg constructor of the class. A no-arg constructor must be accessible to the subclass that is serializable. The fields of serializable subclasses will be restored from the stream.
See: http://docs.oracle.com/javase/7/docs/api/java/io/Serializable.html
You will need to look at readObject and writeObject:
Classes that require special handling during the serialization and deserialization process must implement special methods with these exact signatures:
private void writeObject(java.io.ObjectOutputStream out)
throws IOException
private void readObject(java.io.ObjectInputStream in)
throws IOException, ClassNotFoundException;
See also here: Java Serialization with non serializable parts for more tips and tricks.
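For comparison, Python's pickle module exposes an analogous pair of hooks, __getstate__/__setstate__, that let a class take responsibility for its own serialized state:

```python
import pickle

class Point:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    # Decide what gets serialized (like writeObject).
    def __getstate__(self):
        return (self.x, self.y)

    # Restore state on deserialization (like readObject).
    def __setstate__(self, state):
        self.x, self.y = state

p = pickle.loads(pickle.dumps(Point(1.5, 2.5)))
assert (p.x, p.y) == (1.5, 2.5)
```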
A: I finally found the solution. Thanks @Greg and the other comment that has now been deleted. The solution is that instead of extending these objects we can make stub objects. As I was calling super from the constructor, the x and y fields were inherited from the base class that is not serializable, so they were not serialized and their values were not sent.
So I modified my class as per your suggestions:
public class MyPointF implements Serializable {
/**
*
*/
private static final long serialVersionUID = -455530706921004893L;
public float x;
public float y;
public MyPointF(float x, float y) {
this.x = x;
this.y = y;
}
} | unknown | |
d328 | train | You could do this:
public override string DoSomething()
{
//does something...
base.DoSomething();
return GetName().Result;
}
Warning: this can cause a deadlock
See Don't block on async code | unknown | |
d329 | train | This has nothing to do with React Native; one of your resource files references a nonexistent value (dialogCornerRadius). Locate the reference (Android Studio to the rescue) and fix it. | unknown | |
d330 | train | The following guidelines may help you with setting up a Jenkins freestyle job to build a subproject rather than all projects included in a git repo.
*Install git-plugin for Jenkins
*Create a freestyle job and add your git hub repository's link on SCM repository field
*New Item -->
*Name the item and OK -->
*Select Git in SCM -->
*Add repository URL -->
*Add invoke top-level maven targets as Build steps -->
*In Goals install -pl ChildProjectD
*(optional) Add post-build and other configurations
It will build the child project as you want, instead of the full project.
You can refer Jenkins GitHub Java Application Project Build Configuration Maven to get more help. Check also mvn install -pl --help for more info.
Feel free to ask questions. | unknown | |
d331 | train | Not over the internet, as that would be very dangerous, the user would have to have special software. Otherwise web programs could (very) easily be used for malicious purposes. | unknown | |
d332 | train | Brutally:
function formatValue(value) {
var tempVal = Math.trunc(value * 1000);
var lastValue = (tempVal % 10);
if (lastValue > 0 && lastValue <= 5) lastValue = 5;
else if (lastValue > 5 && lastValue <= 9) lastValue = 10;
else lastValue = 0;
return parseFloat((Math.trunc(tempVal / 10) * 10 + lastValue) / 1000).toFixed(3);
}
formatValue(3.656); // -> "3.660"
formatValue(3.659); // -> "3.660"
formatValue(3.660); // -> "3.660"
formatValue(3.661); // -> "3.665"
formatValue(3.664); // -> "3.665"
formatValue(3.665); // -> "3.665"
Pay attention: the function returns a string (.toFixed returns a string); in any case, a fixed number of decimals doesn't make sense for a number.
A: Rounding to a certain number of decimals is done by multiplying the value to bring the desired amount of decimals into the integer range, then getting rid of the remaining decimals, then dividing by the same multiplier to make it decimal again.
Rounding to a "half-decimal" as you want is accomplished by doubling the multiplier (2X instead of 1X).
The + 0.005 is to make it round up as desired, otherwise it would always round down.
toFixed() is used to make the string representation of the value have the decimal part padded with zeros as needed.
function formatValue(value) {
return (Math.floor((value + 0.005) * 200) / 200).toFixed(3);
}
console.log(formatValue(1.950));
console.log(formatValue(1.954));
console.log(formatValue(1.956));
console.log(formatValue(1.003));
console.log(formatValue(1.007)); | unknown | |
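The second answer's arithmetic transcribed to Python, to show the multiply/floor/divide trick (only values strictly between half-steps are asserted, to avoid floating-point edge cases at the step boundaries):

```python
import math

def format_value(value):
    # Scale so 0.005 steps become integers (x200), add the half-step
    # nudge, drop the remainder with floor, then scale back.
    return f'{math.floor((value + 0.005) * 200) / 200:.3f}'

assert format_value(1.954) == '1.955'
assert format_value(1.956) == '1.960'
assert format_value(1.003) == '1.005'
```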
d333 | train | The typical use for this style is object construction.
Person* pPerson = &(new Person())->setAge(34).setId(55).setName("Jack");
instead of
Person* pPerson = new Person( 34, 55, "Jack" );
Using the second, more traditional style one might forget whether the first value passed to the constructor was the age or the id. This may also lead to multiple constructors based on the validity of some properties.
Using the first style one might forget to set some of the object properties, which may lead to bugs where objects are not 'fully' constructed. (A class property is added at a later point but not all the construction locations get updated to call the required setter.)
As code evolves I really like the fact that I can use the compiler to help me find all the places where an object is created when changing the signature of a constructor. So for that reason I prefer using regular C++ constructors over this style.
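The chaining style above hinges on each setter returning the object itself; a minimal Python sketch of the same idea:

```python
class Person:
    # Each setter returns self so calls can be chained fluently.
    def set_age(self, age):
        self.age = age
        return self

    def set_id(self, id_):
        self.id = id_
        return self

    def set_name(self, name):
        self.name = name
        return self

p = Person().set_age(34).set_id(55).set_name('Jack')
assert (p.age, p.id, p.name) == (34, 55, 'Jack')
```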
This pattern might work well in applications that maintain their datamodel over time according to rules similar to those used in many database applications:
*You can add a field/attribute to a table/class that is NULL by default. (So upgrading existing data requires just a new NULL column in the database.)
*Code that is not changed should still work the same with this NULL field added.
A: Not all the setters, but some of them, could return a reference to the object to be useful.
kind of
a.SetValues(object)(2)(3)(5)("Hello")(1.4);
I used this once long time ago to build SQL expression builder which handles all the Escapes problems and other things.
SqlBuilder builder;
builder.select( column1 )( column2 )( column3 ).
where( "=" )( column1, value1 )
( column2, value2 ).
where( ">" )( column3, 100 ).
from( table1 )( "table2" )( "table3" );
I wasn't able to reproduce sources in 10 minutes. So implementation is behind the curtains.
A: If your motivation is related to chaining (e.g. Brian Ensink's suggestion), I would offer two comments:
1.
If you find yourself frequently settings many things at once, that may mean you should produce a struct or class which holds all of these settings so that they can all be passed at once. The next step might be to use this struct or class in the object itself...but since you're using getters and setters the decision of how to represent it internally will be transparent to the users of the class anyways, so this decision will relate more to how complex the class is than anything.
2.
One alternative to a setter is creating a new object, changing it, and returning it. This is both inefficient and inappropriate in most types, especially mutable types. However, it's an option that people sometimes forget, despite its use in the string class of many languages.
A: This technique is used in the Named parameter Idiom.
A: IMO setters are a code smell that usually indicate one of two things:
Making A Mountain Out Of A Molehill
If you have a class like this:
class Gizmo
{
public:
void setA(int a) { a_ = a; }
int getA() const { return a_; }
void setB(const std::string & b) { b_ = b; }
std::string getB() const { return b_; }
private:
std::string b_;
int a_;
};
... and the values really are just that simple, then why not just make the data members public?:
class Gizmo
{
public:
std::string b_;
int a_;
};
...Much simpler and, if the data is that simple you lose nothing.
Another possibility is that you could be
Making A Molehill Out Of A Mountain
Lots of times the data is not that simple: maybe you have to change multiple values, do some computation, notify some other object; who knows what. But if the data is non-trivial enough that you really do need setters & getters, then it is non-trivial enough to need error handling as well. So in those cases your getters & setters should be returning some kind of error code or doing something else to indicate something bad has happened.
If you are chaining calls together like this:
A.doA().doB().doC();
... and doA() fails, do you really want to be calling doB() and doC() anyway? I doubt it.
A: It's a usable enough pattern if there's a lot of things that need to be set on an object.
class Foo
{
int x, y, z;
public:
Foo &SetX(int x_) { x = x_; return *this; }
Foo &SetY(int y_) { y = y_; return *this; }
Foo &SetZ(int z_) { z = z_; return *this; }
};
int main()
{
Foo foo;
foo.SetX(1).SetY(2).SetZ(3);
}
This pattern replaces a constructor that takes three ints:
int main()
{
Foo foo(1, 2, 3); // Less self-explanatory than the above version.
}
It's useful if you have a number of values that don't always need to be set.
For reference, a more complete example of this sort of technique is referred to as the "Named Parameter Idiom" in the C++ FAQ Lite.
Of course, if you're using this for named parameters, you might want to take a look at boost::parameter. Or you might not...
A: You can return a reference to this if you want to chain setter function calls together like this:
obj.SetCount(10).SetName("Bob").SetColor(0x223344).SetWidth(35);
Personally I think that code is harder to read than the alternative:
obj.SetCount(10);
obj.SetName("Bob");
obj.SetColor(0x223344);
obj.SetWidth(35);
A: I would not think so. Typically, you think of a 'setter' as doing just that: setting a value.
Besides, if you just set the object, don't you have a pointer to it anyway?
d334 | train | I think this is what you are looking for. I added some inline comments to explain what each step is doing. The end result should be all the contacts that can be read by a specified user in your org.
// add a set with all the contact ids in your org
List<contact> contacts = new List<contact>([Select id from Contact]);
Set<ID> contactids = new Set<ID>();
for(Contact c : contacts)
contactids.add(c.id);
// using UserRecordAccess you can query all the record ids and the level of access for a specified user
List<UserRecordAccess> ura = new List<UserRecordAccess>([SELECT RecordId, HasReadAccess, HasTransferAccess, MaxAccessLevel
FROM UserRecordAccess
WHERE UserId = 'theuserid'
AND RecordId in: contactids
] );
// unfortunately you cannot filter the query on hasReadAccess=true, so you'd need this extra step
Set<id> readaccessID = new Set<ID>();
for(UserRecordAccess ur : ura)
{
if(ur.HasReadAccess==true)
{
readaccessID.add(ur.RecordID);
}
}
// This is the list of all the Contacts that can be read by the specified user
List<Contact> readAccessContact = new List<Contact>([Select id, name from contact where id in: readaccessID]);
// show the results
system.debug( readAccessContact); | unknown | |
d335 | train | You can try to light-weight load in main thread by
DispatchQueue.global().async {
UserDefaultsService.shared.updateDataSourceArrayWithWishlist(wishlist: self.wishList)
}
And instead of let dataSourceArray = UserDefaultsService.shared.getDataSourceArray() use self.wishList directly in the last line | unknown | |
d336 | train | @Wiktor Stribizew is right.
replace
[(\d)]
with
\(\d+\)
test it here: https://regex101.com/
A: I solved this problem; the correct regexp is [ ][(][\d]*[)]
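A quick way to sanity-check that pattern outside regex101 (a sketch in Python; the sample strings are made up):

```python
import re

# The corrected pattern: a space, an opening paren, digits, a closing paren.
pattern = re.compile(r"[ ][(][\d]*[)]")

assert pattern.search("total (42)")     # space before "(" -> match
assert pattern.search("items ()")       # [\d]* also allows zero digits
assert not pattern.search("total(42)")  # no leading space -> no match
print("pattern ok")
```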
d337 | train | It is because the execution is stuck in the second infinite loop. The condition (len(Ai)+lenVariation > len(goal)*2 or len(Ai)+lenVariation<round(len(goal)*0.5)) is met every time after the first execution so the if statement is never evaluated to True and the while loop is never exited.
Also, note that your break statements only exist the for loop and not the while loop so statements after the second break are never executed. | unknown | |
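The break-scope point can be seen in a few lines (illustrative only, not the asker's code):

```python
# `break` leaves only the innermost enclosing loop.
log = []
count = 0
while count < 3:                 # the outer while keeps running
    for ch in "ab":
        log.append((count, ch))
        break                    # exits the for loop only
    count += 1                   # still reached on every while pass

print(log)  # → [(0, 'a'), (1, 'a'), (2, 'a')]
```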
d338 | train | Some of these are doable. Some, not so much. Let's tackle the low-hanging fruit first.
Text files
You can just wrap the content in <pre> tags after running it through htmlspecialchars.
PDF
There is no native way for PHP to turn a PDF document into HTML and images. Your best bet is probably ImageMagick, a common image manipulation program. You can basically call convert file.pdf file.png and it will convert the PDF file into a PNG image that you can then serve to the user. ImageMagick is installed on many Linux servers. If it's not available on your host's machine, please ask them to install it, most quality hosts shouldn't have a problem with this.
DOC & DOCX
We're getting a bit more tricky. Again, there's no way to do this in pure PHP. The Docvert extension looks like a possible choice, though it requires OpenOffice be installed as well. I was actually going to recommend plain vanilla OpenOffice/LibreOffice as well, because it can do the job directly from the command line. It's very unlikely that a shared host will want to install this. You'll probably need your own dedicated or virtual private server.
In the end, while these options can be made to work, the output quality cannot be guaranteed. Overall, this is kind of a bad idea that you should not seriously consider implementing.
A: I am sure libraries and such exist that can do this. Google could probably help you there more than I can.
For txt files I would suggest breaking lines after a certain number of characters and putting them inside pre tags.
I know people will not be happy about this response, but if you are on a Linux environment and have pdf2html installed you could use shell_exec and call pdf2html.
Note: If you use shell_exec be wary of what you pass to it since it will be executed on the server outside of PHP.
A: I thought I'd just add that pdfs generally view well in a simple embed tag.
Or use an object so you can have fall backs if it cannot be displayed on the client. | unknown | |
d339 | train | This part of the code doesn't do anything:
rapidjson::StringBuffer strbuf;
rapidjson::Writer<rapidjson::StringBuffer> writer(strbuf);
md_FilesJsonDocument.Accept(writer);
strbuf contains the json string but it is discarded. I would move this into a separate function and print the contents with std::cout << strbuf.GetString();
To write directly to a file:
std::ofstream ofs("out.json", std::ios::out);
if (ofs.is_open()) {
rapidjson::OStreamWrapper osw(ofs);
rapidjson::Writer<rapidjson::OStreamWrapper> writer(osw);
md_FilesJsonDocument.Accept(writer);
} | unknown | |
d340 | train | I dont see any code for adding the like buttons in your loop. So there is nothing to render.
Firstly you should configure your Javascript SDK and link it to your Facebook page / application.
To configure the Javascript SDK you will need to add something like
<script>
window.fbAsyncInit = function() {
FB.init({
appId : 'your-app-id',
xfbml : true,
version : 'v2.1'
});
};
(function(d, s, id){
var js, fjs = d.getElementsByTagName(s)[0];
if (d.getElementById(id)) {return;}
js = d.createElement(s); js.id = id;
js.src = "//connect.facebook.net/en_US/sdk.js";
fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));
</script>
This code is detailed here but links your website to your application and configures the SDK to look for Facebook social plugins on the page.
Then you need to add placeholder elements for the JavaScript SDK to parse and render, like the below:
foreach ($sortedArray as &$filename) {
#echo '<br>' . $filename;
echo '<tr><td>';
echo '<a name="'.$filename.'" href="#'.$filename.'"><img src="'.$filename.'" /></a>';
?>
<div class="fb-like" data-href="<?php echo $imageUrl; ?>" data-layout="button" data-action="like" data-show-faces="true" data-share="false"></div>
<?php
echo substr($filename,strlen($folder),strpos($filename, '.')-strlen($folder));
echo '</td></tr>';
}
These divs have special attributes that the SDK will recognise and use to render the like buttons in the correct place.
You should read the documentation here. | unknown | |
d341 | train | This version of your script should return the entire contents of the page:
var page = require('webpage').create();
page.settings.userAgent = 'SpecialAgent';
page.open('http://www.httpuseragent.org', function (status) {
if (status !== 'success') {
console.log('Unable to access network');
} else {
var ua = page.evaluate(function () {
return document.getElementsByTagName('html')[0].outerHTML;
});
console.log(ua);
}
phantom.exit();
});
A: There are multiple ways to retrieve the page content as a string:
*
*page.content gives the complete source including the markup (<html>) and doctype (<!DOCTYPE html>),
*document.documentElement.outerHTML (via page.evaluate) gives the complete source including the markup (<html>), but without doctype,
*document.documentElement.textContent (via page.evaluate) gives the cumulative text content of the complete document including inline CSS & JavaScript, but without markup,
*document.documentElement.innerText (via page.evaluate) gives the cumulative text content of the complete document excluding inline CSS & JavaScript and without markup.
document.documentElement can be exchanged by an element or query of your choice.
A: To extract the text content of the page, you can try this: return document.body.textContent; but I'm not sure the result will be usable.
A: Having encountered this question while trying to solve a similar problem, I ended up adapting a solution from this question like so:
var fs = require('fs');
var file_h = fs.open('header.html', 'r');
var line;
var header = "";
while(!file_h.atEnd()) {
line = file_h.readLine();
header += line;
}
console.log(header);
file_h.close();
phantom.exit();
This gave me a string with the read-in HTML file that was sufficient for my purposes, and hopefully may help others who came across this.
The question seemed ambiguous (was it the entire content of the file required, or just the "text" aka Strings?) so this is one possible solution. | unknown | |
d342 | train | Well the obvious answer is that in some situations requests would take longer than 90 seconds for the worker process to return. If you can't imagine a situation where this would be appropriate, then feel free to lower it.
I wouldn't recommend going too much lower than 30 seconds. I can see situations where you get in recycle loops. However you can do testing and see what makes sense in your situation. I would recommend Siege for load testing to see how your application behaves. | unknown | |
d343 | train | You could try:
System.out.printf("Input an integer: ");
int a = in.nextInt();
int k = 0;
String str_a = String.valueOf(a);
while(a > 1)
{
if(a % 2 == 0)
a = a / 2;
else
a = 3 * a + 1;
str_a += ", " + String.valueOf(a);
k++;
}
System.out.println("k = " + k);
System.out.println("a = " + str_a); | unknown | |
d344 | train | You're never calling the scalarMultiply method.
A: You're never calling scalarMultiply and the number of the brackets is incorrect.
public class warm4{
public static void main(String[] args){
double[] array1 = {1,2,3,4};
double scale1 = 3;
scalarMultiply(array1, scale1);
}
public static void scalarMultiply(double[] array, double scale){
for( int i=0; i<array.length; i++){
array[i] = (array[i]) * scale;
System.out.print(array[i] + " ");
}
}
}
A: Your method is OK. But you must call it from your main:
public static void main(String[] args){
double[] array1 = {1,2,3,4};
double scale1 = 3;
scalarMultiply(array1, scale1);
for (int i = 0; i < array1.length; i++) {
System.out.println(array1[i]);
}
} | unknown | |
d345 | train | You can use wallet_switchEthereumChain method of RPC API of Metamask
Visit: https://docs.metamask.io/guide/rpc-api.html#wallet-switchethereumchain
A: const changeNetwork = async () => {
if (window.ethereum) {
try {
await window.ethereum.request({
method: 'wallet_switchEthereumChain',
params: [{ chainId: Web3.utils.toHex(chainId) }],
});
} catch (error) {
console.error(error);
}
}
}
changeNetwork()
A: What if the user doesn't have the required network added? Here is an expanded version which tries to switch, otherwise add the network to MetaMask:
const chainId = 137 // Polygon Mainnet
if (window.ethereum.networkVersion !== chainId) {
try {
await window.ethereum.request({
method: 'wallet_switchEthereumChain',
params: [{ chainId: web3.utils.toHex(chainId) }]
});
} catch (err) {
// This error code indicates that the chain has not been added to MetaMask
if (err.code === 4902) {
await window.ethereum.request({
method: 'wallet_addEthereumChain',
params: [
{
chainName: 'Polygon Mainnet',
chainId: web3.utils.toHex(chainId),
nativeCurrency: { name: 'MATIC', decimals: 18, symbol: 'MATIC' },
rpcUrls: ['https://polygon-rpc.com/']
}
]
});
}
}
}
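Both snippets rely on the chain id being hex-encoded. If web3 isn't already loaded, the conversion is small enough to inline (toChainIdHex is a made-up helper name):

```javascript
// wallet_switchEthereumChain expects the chain id as a 0x-prefixed hex string.
function toChainIdHex(chainId) {
  return '0x' + Number(chainId).toString(16);
}

console.log(toChainIdHex(137)); // Polygon Mainnet -> "0x89"
console.log(toChainIdHex(1));   // Ethereum Mainnet -> "0x1"
```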
A: export async function switchToNetwork({
library,
chainId,
}: SwitchNetworkArguments): Promise<null | void> {
if (!library?.provider?.request) {
return
}
const formattedChainId = hexStripZeros(
BigNumber.from(chainId).toHexString(),
)
try {
await library.provider.request({
method: 'wallet_switchEthereumChain',
params: [{ chainId: formattedChainId }],
})
} catch (error) {
// 4902 is the error code for attempting to switch to an unrecognized chainId
// eslint-disable-next-line @typescript-eslint/no-explicit-any
if ((error as any).code === 4902) {
const info = CHAIN_INFO[chainId]
await library.provider.request({
method: 'wallet_addEthereumChain',
params: [
{
chainId: formattedChainId,
chainName: info.label,
rpcUrls: [info.addNetworkInfo.rpcUrl],
nativeCurrency: info.addNetworkInfo.nativeCurrency,
blockExplorerUrls: [info.explorer],
},
],
})
// metamask (only known implementer) automatically switches after a network is added
// the second call is done here because that behavior is not a part of the spec and cannot be relied upon in the future
// metamask's behavior when switching to the current network is just to return null (a no-op)
try {
await library.provider.request({
method: 'wallet_switchEthereumChain',
params: [{ chainId: formattedChainId }],
})
} catch (error) {
console.debug(
'Added network but could not switch chains',
error,
)
}
} else {
throw error
}
}
} | unknown | |
d346 | train | Think this answer seems to be similar to your question.Hope it provides some insight.
Time Binding issue in Bootstrap timepicker | unknown | |
d347 | train | If the registration is succesful you can simply push the email and password variables to firebase. See code below.
function createUser(email, password, username) {
ref.createUser({
email: email,
password: password
}, function(error) {
if (error === null) {
... Registration successful
$activityIndicator.stopAnimating();
$scope.padding_error = null;
$scope.error = null;
##NEWCODE HERE##
emailRef = new Firebase("<YOURFIREBASEURL>/accounts/"+username+"/email")
passRef = new Firebase("<YOURFIREBASEURL>/accounts/"+username+"/password")
emailRef.set(email)
passRef.set(password)
logUserIn(email, password);
} else {
... Something went wrong at registration
}
}
});
} | unknown | |
d348 | train | You can .map over all array entries and then use .reduce on the Object.values of each array entry to sum the values:
let data = [
{
"cost one": "118",
"cost two": "118",
"cost three": "118"
},
{
"cost one": "118",
"cost two": "111",
"cost three": "118"
},
{
"cost one": "120",
"cost two": "118",
"cost three": "118"
}
];
function sumValues(objArr) {
return objArr.map(curr => {
return Object.values(curr).reduce((prev, val) => prev += Number(val), 0)
});
}
console.log(sumValues(data)); | unknown | |
d349 | train | IDEA is using its own method of instrumenting bytecode to add such validations. For command line builds we provide javac2 Ant task that does the instrumentation (extends standard javac task). If you generate Ant build from IDEA, you will have an option to use javac2.
We don't provide similar Maven plug-in yet, but there is third-party version which may work for you (though, it seems to be a bit old).
A: I'd go the AOP way:
First of all you need a javax.validation compatible validator (Hibernate Validator is the reference implementation).
Now create an aspectj aspect that has a Validator instance and checks all method parameters for validation errors. Here is a quick version to get you started:
public aspect ValidationAspect {
private final Validator validator;
{
final ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
validator = factory.getValidator();
}
pointcut serviceMethod() : execution(public * com.yourcompany**.*(..));
before() : serviceMethod(){
final Method method = (Method) thisJoinPoint.getTarget();
for(final Object arg : thisJoinPoint.getArgs()){
if(arg!=null) validateArg(arg,method);
}
}
private void validateArg(final Object arg, final Method method) {
final Set<ConstraintViolation<Object>> validationErrors = validator.validate(arg);
if(!validationErrors.isEmpty()){
final StringBuilder sb = new StringBuilder();
sb.append("Validation Errors in method ").append(method).append(":\n");
for (final ConstraintViolation<Object> constraintViolation : validationErrors) {
sb.append(" - ").append(constraintViolation.getMessage()).append("\n");
}
throw new RuntimeException(sb.toString());
}
}
}
Use the aspectj-maven-plugin to weave that aspect into your test and / or production code.
If you only want this functionality for testing, you might put the aspectj-plugin execution in a profile.
A: There is a maven plugin closely affiliated with the IntelliJ functionality, currently at https://github.com/osundblad/intellij-annotations-instrumenter-maven-plugin. It is discussed under the IDEA-31368 ticket first mentioned in CrazyCoder's answer.
A: You can do annotation validation in your JUnit tests.
import java.util.Set;
import javax.validation.ConstraintViolation;
import junit.framework.Assert;
import org.hibernate.validator.HibernateValidator;
import org.junit.Before;
import org.junit.Test;
import org.springframework.validation.beanvalidation.LocalValidatorFactoryBean;
public class Temp {
private LocalValidatorFactoryBean localValidatorFactory;
@Before
public void setup() {
localValidatorFactory = new LocalValidatorFactoryBean();
localValidatorFactory.setProviderClass(HibernateValidator.class);
localValidatorFactory.afterPropertiesSet();
}
@Test
public void testLongNameWithInvalidCharCausesValidationError() {
final ProductModel productModel = new ProductModel();
productModel.setLongName("A long name with\t a Tab character");
Set<ConstraintViolation<ProductModel>> constraintViolations = localValidatorFactory.validate(productModel);
Assert.assertTrue("Expected validation error not found", constraintViolations.size() == 1);
}
}
If your poison is Spring, take a look at these Spring Unit Tests | unknown | |
d350 | train | You can use Uncorelated sub queries in $lookup
*
*$match to get the "notifications.sms": true
*$lookup to join the two collections. We assign uId = _id from the USER collection. Inside the pipeline, we use $match to find documents with active: true and user_id = uId
here is the script
db.USER.aggregate([
{
"$match": {
"notifications.sms": true
}
},
{
"$lookup": {
"from": "ALERT",
"let": {
uId: "$_id"
},
"pipeline": [
{
$match: {
$and: [
{
active: true
},
{
$expr: {
$eq: [
"$user_id",
"$$uId"
]
}
}
]
}
}
],
"as": "joinAlert"
}
}
])
Working Mongo playground | unknown | |
d351 | train | I've done the first half of this before, so we'll start there (convenient, no?). Without knowing to much about your needs I'd recommend the following as a base (you can adjust the column widths as needed):
CREATE TABLE tree (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
parent_id INT UNSIGNED NOT NULL DEFAULT 0,
type VARCHAR(20) NOT NULL,
name VARCHAR(32) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY (parent_id, type, name),
KEY (parent_id)
);
Why did I do it this way? Well, let's go through each field. id is a globally unique value that we can use to identify this element and all the elements that directly depend on it. parent_id lets us go back up through the tree until we reach parent_id == 0, which is the top of the tree. type would be your "car" or "vent" descriptions. name would let you qualify type, so things like "Camry" and "Driver Left" (for "vent" obviously).
The data would be stored with values like these:
INSERT INTO tree (parent_id, type, name) VALUES
(0, 'car', 'Camry'),
(1, 'hvac', 'HVAC'),
(2, 'vent', 'Driver Front Footwell'),
(2, 'vent', 'Passenger Front Footwell'),
(2, 'vent', 'Driver Rear Footwell'),
(2, 'vent', 'Passenger Rear Footwell'),
(1, 'glass', 'Glass'),
(7, 'window', 'Windshield'),
(7, 'window', 'Rear Window'),
(7, 'window', 'Driver Front Window'),
(7, 'window', 'Passenger Front Window'),
(7, 'window', 'Driver Rear Window'),
(7, 'window', 'Passenger Rear Window'),
(1, 'mirrors', 'Mirrors'),
(14, 'mirror', 'Rearview Mirror'),
(14, 'mirror', 'Driver Mirror'),
(14, 'mirror', 'Passenger Mirror');
I could keep going, but I think you get the idea. Just to be sure though... All those values would result in a tree that looked like this:
(1, 0, 'car', 'Camry')
| (2, 1, 'hvac', 'HVAC')
| +- (3, 2, 'vent', 'Driver Front Footwell')
| +- (4, 2, 'vent', 'Passenger Front Footwell')
| +- (5, 2, 'vent', 'Driver Rear Footwell')
| +- (6, 2, 'vent', 'Passenger Rear Footwell')
+- (7, 1, 'glass', 'Glass')
| +- (8, 7, 'window', 'Windshield')
| +- (9, 7, 'window', 'Rear Window')
| +- (10, 7, 'window', 'Driver Front Window')
| +- (11, 7, 'window', 'Passenger Front Window')
| +- (12, 7, 'window', 'Driver Rear Window')
| +- (13, 7, 'window', 'Passenger Rear Window')
+- (14, 1, 'mirrors', 'Mirrors')
+- (15, 14, 'mirror', 'Rearview Mirror')
+- (16, 14, 'mirror', 'Driver Mirror')
+- (17, 14, 'mirror', 'Passenger Mirror')
Now then, the hard part: copying the tree. Because of the parent_id references we can't do something like an INSERT INTO ... SELECT; we're reduced to having to use a recursive function. I know, we're entering The Dirty place. I'm going to pseudo-code this since you didn't note which language you're working with.
FUNCTION copyTreeByID (INTEGER id, INTEGER max_depth = 10, INTEGER parent_id = 0)
row = MYSQL_QUERY_ROW ("SELECT * FROM tree WHERE id=?", id)
IF NOT row
THEN
RETURN NULL
END IF
IF ! MYSQL_QUERY ("INSERT INTO trees (parent_id, type, name) VALUES (?, ?, ?)", parent_id, row["type"], row["name"])
THEN
RETURN NULL
END IF
parent_id = MYSQL_LAST_INSERT_ID ()
IF max_depth LESSTHAN 0
THEN
RETURN
END IF
rows = MYSQL_QUERY_ROWS ("SELECT id FROM trees WHERE parent_id=?", id)
FOR rows AS row
copyTreeByID (row["id"], max_depth - 1, parent_id)
END FOR
RETURN parent_id
END FUNCTION
FUNCTION copyTreeByTypeName (STRING type, STRING name)
row = MYSQL_QUERY_ROW ("SELECT id FROM tree WHERE parent_id=0 AND type=? AND name=?", type, name)
IF NOT ARRAY_LENGTH (row)
THEN
RETURN
END IF
RETURN copyTreeByID (row["id"])
END FUNCTION
copyTreeByTypeName looks up the tree ID for the matching type and name and passes it to copyTreeByID. This is mostly a utility function to help you copy stuff by type/name.
copyTreeByID is the real beast. Fear it because it is recursive and evil. Why is it recursive? Because your trees are not predictable and can be any depth. But it's okay, we've got a variable to track depth and limit it (max_depth). So let's walk through it.
Start by grabbing all the data for the element. If we didn't get any data, just return. Re-insert the data with the element's type and name, and the passed parent_id. If the query fails, return. Set the parent_id to the last insert ID so we can pass it along later. Check for max_depth being less than zero, which indicates we've reached max depth; if we have return. Grab all the elements from the tree that have a parent of id. Then for each of those elements recurse into copyTreeByID passing the element's id, max_depth minus 1, and the new parent_id. At the end return parent_id so you can access the new copy of the elements.
Make sense? (I read it back and it made sense, not that that means anything). | unknown | |
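For the recursion-averse, the pseudocode translates almost line-for-line into real code. Here's a sketch using Python and an in-memory SQLite database (the table layout matches the one above; the seed data is trimmed to three rows for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tree (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    parent_id INTEGER NOT NULL DEFAULT 0,
    type      TEXT NOT NULL,
    name      TEXT NOT NULL)""")
conn.executemany(
    "INSERT INTO tree (parent_id, type, name) VALUES (?, ?, ?)",
    [(0, "car", "Camry"), (1, "hvac", "HVAC"),
     (2, "vent", "Driver Front Footwell")])

def copy_tree_by_id(node_id, max_depth=10, parent_id=0):
    row = conn.execute("SELECT type, name FROM tree WHERE id=?",
                       (node_id,)).fetchone()
    if row is None:
        return None
    cur = conn.execute(
        "INSERT INTO tree (parent_id, type, name) VALUES (?, ?, ?)",
        (parent_id, row[0], row[1]))
    new_id = cur.lastrowid
    if max_depth < 0:                     # depth guard, as in the pseudocode
        return new_id
    # fetchall() materialises the child list before the recursive inserts
    # mutate the table under us.
    children = conn.execute("SELECT id FROM tree WHERE parent_id=?",
                            (node_id,)).fetchall()
    for (child_id,) in children:
        copy_tree_by_id(child_id, max_depth - 1, new_id)
    return new_id

new_root = copy_tree_by_id(1)   # duplicate the whole Camry tree
print(new_root)                 # → 4 (ids 4, 5, 6 are the copies)
```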
d352 | train | Trying to modify the standard keyboard requires taking a dangerous path into private APIs and a broken app in future iOS versions.
I think the best solution for you would be to implement the textField:shouldChangeCharactersInRange:replacementString: method of UITextFieldDelegate and replace whitespace characters with the empty string.
Once this is implemented, hitting the space bar will simply do nothing. | unknown | |
d353 | train | autoit may work. i'd use python PIL. i can specify font, convert it to a layer and overlay on top of preexisting image.
EDIT
actually imagemagick can be easier than PIL http://www.imagemagick.org/Usage/text/
A: Should not be much of a problem if you have Python and the Python Imaging Library (PIL) installed:
from PIL import Image, ImageFont, ImageDraw
BACKGROUND = '/path/to/background.png'
OUTPUT = '/path/to/mypicture_{0:04d}.png'
START = 0
STOP = 9999
# Create a font object from a True-Type font file and specify the font size.
fontobj = ImageFont.truetype('/path/to/font/arial.ttf', 24)
for i in range(START, STOP + 1):
    img = Image.open(BACKGROUND)
    draw = ImageDraw.Draw(img)
    # Write a text over the background image.
    # Parameters: location(x, y), text, textcolor(R, G, B), fontobject
    draw.text((0, 0), '{0:04d}'.format(i), (255, 0, 0), font=fontobj)
    img.save(OUTPUT.format(i))

print('Script done!')
Please consult the PIL manual for other ways of creating font objects for other font formats | unknown | |
d354 | train | Saw this in the source and something clicked in my head.
Changing the filter method above to the following gave me the desired results.
def filter(keys)
if (scope == object) or scope.has_role?(:super)
keys
else
keys - [:auth_token]
end
end
Hope this helps anyone else using version 0.9.x | unknown | |
d355 | train | Put the valid names into a text file (i.e. "ValidNames.txt") and use findstr with the /G option.
02-TestFile.xlsx
05-TestFile.xlsx
10-TestFile.xlsx
...
@echo off
for /f "delims=" %%a in ('
dir /b *.xlsx ^| findstr /vxlg:"ValidNames.txt"
') do move "%%a" "C:\Temp\Archive\Error" | unknown | |
d356 | train | The process.stdout and process.stderr pipes are independent of whatever actual code you're running using Node, so if you want their output sent to files, then make your main entry point script capture stdout/stderr output and that's simply what it'll do for as long as Node.js runs that script.
You can add log writing yourself by tapping into process.stdout.on(`data`, data => ...) (and stderr equivalent), or you can pipe their output to a file, or (because why reinvent the wheel?) you can find a logging solution that does that for you, but then that's on you to find; asking others to recommend one is off topic on Stack Overflow.
Also note that stdout/stderr have some sync/async quirks, so give https://nodejs.org/api/process.html#process_a_note_on_process_i_o a read because that has important information for you to be aware of.
A: The example from winston basically solved the issue. | unknown | |
d357 | train | You can use this template to get required counts.
<xsl:template match="lst/arr/lst">
<ns:reply>
<ns:party-name>
<xsl:value-of select="str[@name='value']"/>
</ns:party-name>
<ns:shipments-count>
<xsl:value-of select="int[@name='count']" />
</ns:shipments-count>
<ns:no-entry-or-line-release-count>
<xsl:value-of select ="count(arr[@name='pivot']/lst[count(str[@name='value'])=0]/arr[@name='pivot']/lst/int[@name='count'])"> </xsl:value-of>
</ns:no-entry-or-line-release-count>
</ns:reply>
</xsl:template> | unknown | |
d358 | train | It turns out the 'table' I was pulling from was in fact a database view, a sort of pseudo-table, which is composed of sql joining together other tables.
The error actually lay in the view, rather than in my SQL, which is where the subquery referred to in the error was. Thanks for the help in the comments! | unknown | |
d359 | train | The simplest way would be to create a function with the code you want to execute after the execution of the request, and pass this function in parameter of the getfile function :
getFile : function( fileName, success ) {
var me = this;
me.db.transaction( function( tx ) {
tx.executeSql( "SELECT * FROM content WHERE fileName = '" + fileName + "'", [ ], me.onSuccess, me.onError );
},
success
);
// somehow return results as empty array or array with object
// I know results need to be transformed
});
var r = getFile(
name,
function() {
var r = getFile( name );
if ( r.length > 0 ) {
// use it
}
else {
// make AJAX call and store it
}
}
);
Otherwise, the best way to perform is to use Promises to resolve asynchronous issues but you'll have to use a library like JQuery.
http://joseoncode.com/2011/09/26/a-walkthrough-jquery-deferred-and-promise/ | unknown | |
d360 | train | You're doing
SELECT pram = (…) FROM dbo.ClassRelationship a …;
where (…) is an expression that is evaluated and then compared to the current value of pram (which was initialised to an empty string). The query does nothing else, there is no destination for this boolean value (comparison result) it computes, you're getting an error.
You most likely meant to either perform an assignment
pram = SELECT (…) FROM dbo.ClassRelationship a …;
or use an INTO clause:
SELECT (…) INTO pram FROM dbo.ClassRelationship a …;
Notice that you don't even need pl/pgsql to do this. A plain sql function would do as well:
CREATE OR REPLACE FUNCTION dbo.fnRepID(pram_ID BIGINT) RETURNS varchar
LANGUAGE sql
STABLE
RETURN (
SELECT
'' ||
(CASE COALESCE(a.Name, '') WHEN '' THEN '' ELSE b.Name || ' - ' END) ||
(CASE COALESCE(b.Name, '' ) WHEN '' THEN '' ELSE b.Name || ' - ' END) ||
f.NAME ||
';'
FROM dbo.ClassRelationship a
LEFT JOIN dbo.ClassRelationship b ON a.ParentClassID = b.ClassID AND b.Type = 2 AND a.Type = 1
); | unknown | |
d361 | train | If you want to use your url params in your state everytime, you can use the resolve function:
.state('edit', {
url: '/editItem/:id/:userId',
templateUrl: 'app/items/edit.html',
controller: 'editController',
controllerAs: 'vm',
resolve: {
testObject: function($stateParams) {
return {
id: $stateParams.id,
userId: $stateParams.userId
}
}
}
})
Now, you can pass testObject as a dependency to your editController and every time this route is resolved, the values will be available within your controller as testObject.id and testObject.userId
If you want to pass an object from one state to the next, use $state.go programatically:
$state.go('myState', {myParam: {some: 'thing'}})
$stateProvider.state('myState', {
url: '/myState',
params: {myParam: null}, ...
The only other option is to cache, through localStorage or cookies
d362 | train | I wouldn't know what could be going wrong, but I do know an easy solution could be creating a global array and then setting the property to the global array.
Code:
var array = [/* your array */];
var cc_cd = {
    List: array,
    // ... other properties
};
Please mark answered or vote to let me know if this helped! | unknown | |
d363 | train | Try this:
object.visible = false; //Invisible
object.visible = true; //Visible
A: Simply use the object's traverse method to hide the mesh in three.js.
In my code I hide the object based on its name:
object.traverse ( function (child) {
if (child instanceof THREE.Mesh) {
child.visible = true;
}
});
Here is the working sample for Object show/hide option
http://jsfiddle.net/ddbTy/287/
I think it should be helpful.
d364 | train | Not directly, no. Unless it's in the browser's UA, there's no way of detecting it without some kind of plugin.
A: If you can use VBSCRIPT you can get what you are looking for.
The WMI class Win32_OperatingSystem has the properties ServicePackMajorVersion, ServicePackMinorVersion, Name and Version.
Try samples here: WMI Tasks
Hope this can help | unknown | |
d365 | train | how come I can still add content to the file such as shown here Android saving Bitmap to SD card.
That code creates a new file after deleting the old one.
So how do I delete a file so that it is completely gone? So that when someone go look through file manager, the file is no longer there?
Call delete() on a File object that points to the file. Then, do not use that same File object to write to the file again, thereby creating a new file, as the code that you link to does. | unknown | |
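A minimal sketch of that sequence in plain Java (the file name is invented for illustration):

```java
import java.io.File;
import java.io.IOException;

public class Main {
    public static void main(String[] args) throws IOException {
        File file = new File("demo.tmp");   // hypothetical path, just for the demo
        file.createNewFile();               // make sure there is something to delete
        boolean deleted = file.delete();    // removes the file from the filesystem
        // Do NOT write to `file` again here -- that would create a brand-new
        // file at the same path, which is what the linked code does.
        System.out.println(deleted && !file.exists()); // true: file is gone
    }
}
```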
d366 | train | Apparently the source code is correct, but there seem to be problems with the database:
The table, corresponding with Class1, contains a column voa_class. The content of that column should be <NameSpace_of_Class1>.Class1. In case there's something else, like <Whatever_NameSpace>.<AnotherClass> or <AnotherNameSpace>.Class1 (like in my case), the mentioned exception gets generated. | unknown | |
d367 | train | Since you already have the data in RAM, grouping in PHP seems more than reasonable, since it doesn't take a lot of processing.
You might want to try
$item_info_tmp=array();
foreach ($item_info as $ii) {
if (!isset($item_info_tmp[$ii['folder_id']]))
$item_info_tmp[$ii['folder_id']]=array();
$item_info_tmp[$ii['folder_id']][]=$ii;
}
$item_info=array_values($item_info_tmp); | unknown | |
d368 | train | Try this simply
.HTMLBody = "<table><tr><td style='width:" & tblWidth & "px; color:#4d4d4d; height:2px;'></td></tr></table>"
A: To use a stylesheet instead:
Just create one using a string and include it in your HTMLBody
Dim sStyleSheet as String
sStyleSheet = "<style> td {width:500px;} </style>"
or to include your variable
sStyleSheet = "<style> td {width:" & tblWidth & "px;} </style>"
See how you are just building a string?
Then include it in the HTML:
sHTML = "<table><tr><td> Hello World </td></tr></table>"
sStyleSheet = "<style> td {width:" & tblWidth & "px;} </style>"
.HTMLBody = sStyleSheet & sHTML
Make sense? | unknown | |
d369 | train | That's because (as listed in the documentation) the VALUE() function has not yet been implemented in the PHPExcel calculation engine | unknown | |
d370 | train | Error: startTime contains string values but got a date (Code: 102,
Version: 1.2.21)
The error clearly indicates that you are comparing two different types: one is a string and the other is a date. So there are two options: convert the values so that both are dates, or so that both are strings. To implement this easily, you can write a category with a function that does the conversion and use that method to perform the comparison.
A: You've converted leftDate and arrivedDate to leftDateString and arrivedDateString, but you're still using leftDate and arrivedDate in your query. I think you meant to write:
PFQuery *query = [PFQuery queryWithClassName:@"PassData"];
[query whereKey:@"startTime" greaterThan:leftDateString];
[query whereKey:@"timeArrived" lessThan:arrivedDateString];
in which case you'd no longer get the error since you'd be comparing string to string.
Although I generally recommend that you store and sort your dates with NSDate objects, in this case where your format is in the same descending order of importance as a typical NSDate sort of month, day, hour, then minute, i.e. "MM-dd hh:mm", as long as year or seconds don't matter to you and as long as the queried time format matches the database time format, this query should work since greaterThan and lessThan will compare the string objects alphabetically/numerically.
A: I guess for that to work you have to have Date fields in database, then you pass NSDate to whereKey: on iOS. | unknown | |
d371 | train | Grant usage/select to a single table
If you only grant CONNECT to a database, the user can connect but has no other privileges. You have to grant USAGE on namespaces (schemas) and SELECT on tables and views individually like so:
GRANT CONNECT ON DATABASE mydb TO xxx;
-- This assumes you're actually connected to mydb..
GRANT USAGE ON SCHEMA public TO xxx;
GRANT SELECT ON mytable TO xxx;
Multiple tables/views (PostgreSQL 9.0+)
In the latest versions of PostgreSQL, you can grant permissions on all tables/views/etc in the schema using a single command rather than having to type them one by one:
GRANT SELECT ON ALL TABLES IN SCHEMA public TO xxx;
This only affects tables that have already been created. More powerfully, you can automatically have default roles assigned to new objects in future:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO xxx;
Note that by default this will only affect objects (tables) created by the user that issued this command: although it can also be set on any role that the issuing user is a member of. However, you don't pick up default privileges for all roles you're a member of when creating new objects... so there's still some faffing around. If you adopt the approach that a database has an owning role, and schema changes are performed as that owning role, then you should assign default privileges to that owning role. IMHO this is all a bit confusing and you may need to experiment to come up with a functional workflow.
Multiple tables/views (PostgreSQL versions before 9.0)
To avoid errors in lengthy, multi-table changes, it is recommended to use the following 'automatic' process to generate the required GRANT SELECT to each table/view:
SELECT 'GRANT SELECT ON ' || relname || ' TO xxx;'
FROM pg_class JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace
WHERE nspname = 'public' AND relkind IN ('r', 'v', 'S');
This should output the relevant GRANT commands to GRANT SELECT on all tables, views, and sequences in public, for copy-n-paste love. Naturally, this will only be applied to tables that have already been created.
A: From PostgreSQL v14 on, you can do that simply by granting the predefined pg_read_all_data role:
GRANT pg_read_all_data TO xxx;
A: Do note that PostgreSQL 9.0 (today in beta testing) will have a simple way to do that:
test=> GRANT SELECT ON ALL TABLES IN SCHEMA public TO joeuser;
A: If your database is in the public schema, it is easy (this assumes you have already created the readonlyuser)
db=> GRANT SELECT ON ALL TABLES IN SCHEMA public to readonlyuser;
GRANT
db=> GRANT CONNECT ON DATABASE mydatabase to readonlyuser;
GRANT
db=> GRANT SELECT ON ALL SEQUENCES IN SCHEMA public to readonlyuser;
GRANT
If your database is using customschema, execute the above but add one more command:
db=> ALTER USER readonlyuser SET search_path=customschema, public;
ALTER ROLE
A: The not straightforward way of doing it would be granting select on each table of the database:
postgres=# grant select on db_name.table_name to read_only_user;
You could automate that by generating your grant statements from the database metadata.
A: Here is the best way I've found to add read-only users (using PostgreSQL 9.0 or newer):
$ sudo -upostgres psql postgres
postgres=# CREATE ROLE readonly WITH LOGIN ENCRYPTED PASSWORD '<USE_A_NICE_STRONG_PASSWORD_PLEASE';
postgres=# GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
Then log in to all related machines (master + read-slave(s)/hot-standby(s), etc..) and run:
$ echo "hostssl <PUT_DBNAME_HERE> <PUT_READONLY_USERNAME_HERE> 0.0.0.0/0 md5" | sudo tee -a /etc/postgresql/9.2/main/pg_hba.conf
$ sudo service postgresql reload
A: By default new users will have permission to create tables. If you are planning to create a read-only user, this is probably not what you want.
To create a true read-only user with PostgreSQL 9.0+, run the following steps:
# This will prevent default users from creating tables
REVOKE CREATE ON SCHEMA public FROM public;
# If you want to grant a write user permission to create tables
# note that superusers will always be able to create tables anyway
GRANT CREATE ON SCHEMA public to writeuser;
# Now create the read-only user
CREATE ROLE readonlyuser WITH LOGIN ENCRYPTED PASSWORD 'strongpassword';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonlyuser;
If your read-only user doesn't have permission to list tables (i.e. \d returns no results), it's probably because you don't have USAGE permissions for the schema. USAGE is a permission that allows users to actually use the permissions they have been assigned. What's the point of this? I'm not sure. To fix:
# You can either grant USAGE to everyone
GRANT USAGE ON SCHEMA public TO public;
# Or grant it just to your read only user
GRANT USAGE ON SCHEMA public TO readonlyuser;
A: Reference taken from this blog:
Script to Create Read-Only user:
CREATE ROLE Read_Only_User WITH LOGIN PASSWORD 'Test1234'
NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION VALID UNTIL 'infinity';
\connect YourDatabaseName;
Assign permission to this read-only user:
GRANT CONNECT ON DATABASE YourDatabaseName TO Read_Only_User;
GRANT USAGE ON SCHEMA public TO Read_Only_User;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO Read_Only_User;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO Read_Only_User;
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
Assign permissions to read all newly tables created in the future
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO Read_Only_User;
A: I’ve created a convenient script for that; pg_grant_read_to_db.sh. This script grants read-only privileges to a specified role on all tables, views and sequences in a database schema and sets them as default.
A: I read through all the possible solutions, which are all fine, if you remember to connect to the database before you grant the things ;) Thanks anyway to all other solutions!!!
user@server:~$ sudo su - postgres
create psql user:
postgres@server:~$ createuser --interactive
Enter name of role to add: readonly
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
start psql cli and set a password for the created user:
postgres@server:~$ psql
psql (10.6 (Ubuntu 10.6-0ubuntu0.18.04.1), server 9.5.14)
Type "help" for help.
postgres=# alter user readonly with password 'readonly';
ALTER ROLE
connect to the target database:
postgres=# \c target_database
psql (10.6 (Ubuntu 10.6-0ubuntu0.18.04.1), server 9.5.14)
You are now connected to database "target_database" as user "postgres".
grant all the needed privileges:
target_database=# GRANT CONNECT ON DATABASE target_database TO readonly;
GRANT
target_database=# GRANT USAGE ON SCHEMA public TO readonly ;
GRANT
target_database=# GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly ;
GRANT
alter default privileges for targets db public shema:
target_database=# ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;
ALTER DEFAULT PRIVILEGES
A: Taken from a link posted in response to despesz' link.
Postgres 9.x appears to have the capability to do what is requested. See the Grant On Database Objects paragraph of:
http://www.postgresql.org/docs/current/interactive/sql-grant.html
Where it says: "There is also an option to grant privileges on all objects of the same type within one or more schemas. This functionality is currently supported only for tables, sequences, and functions (but note that ALL TABLES is considered to include views and foreign tables)."
This page also discusses use of ROLEs and a PRIVILEGE called "ALL PRIVILEGES".
Also present is information about how GRANT functionalities compare to SQL standards.
A: CREATE USER username SUPERUSER password 'userpass';
ALTER USER username set default_transaction_read_only = on; | unknown | |
d372 | train | You could debug the action and look at what exception gets thrown.
Then you can easily wrap this line of code in a try-catch, and if it fails, return something other than 500.
try
{
return //...;
}
catch (YourException ex) // your specific exception type
{
    return //... for example BadRequest or something different
}
Hope it helps. | unknown | |
d373 | train | I would probably write the threshold function the following way, taking advantage of the Timestamp combinator.
public static IObservable<U> TimeLimitedThreshold
<T,U>
( this IObservable<T> source
, int count
, TimeSpan timeSpan
, Func<IList<T>,U> selector
, IScheduler scheduler = null
)
{
var tmp = scheduler == null
? source.Timestamp()
: source.Timestamp(scheduler);
return tmp
.Buffer(count, 1).Where(b=>b.Count==count)
.Select(b => new { b, span = b.Last().Timestamp - b.First().Timestamp })
.Where(o => o.span <= timeSpan)
.Select(o => selector(o.b.Select(ts=>ts.Value).ToList()));
}
As an added convenience, when the trigger is fired, the complete buffer that satisfies the trigger is provided to your selector function.
For example
var keys = KeyPresses().ToObservable(Scheduler.Default).Publish().RefCount();
IObservable<string> fastKeySequences = keys.TimeLimitedThreshold
( 3
, TimeSpan.FromSeconds(5)
, b => String.Join("", b)
);
The extra IScheduler parameter is given as the Timestamp method has an extra overload which takes one. This might be useful if you want to have a custom scheduler which doesn't track time according to the internal clock. For testing purposes using an historical scheduler can be useful and then you would need the extra overload.
And here is a fully working test showing the use of a scheduler (using XUnit, and FluentAssertions for the Should().Be(..)):
public class TimeLimitedThresholdSpec : ReactiveTest
{
TestScheduler _Scheduler = new TestScheduler();
[Fact]
public void ShouldWork()
{
var o = _Scheduler.CreateColdObservable
( OnNext(100, "A")
, OnNext(200, "B")
, OnNext(250, "C")
, OnNext(255, "D")
, OnNext(258, "E")
, OnNext(600, "F")
);
var fixture = o
.TimeLimitedThreshold
(3
, TimeSpan.FromTicks(20)
, b => String.Join("", b)
, _Scheduler
);
var actual = _Scheduler
.Start(()=>fixture, created:0, subscribed:1, disposed:1000);
actual.Messages.Count.Should().Be(1);
actual.Messages[0].Value.Value.Should().Be("CDE");
}
}
Subscribing is done the following way
IDisposable subscription = fastKeySequences.Subscribe(s=>Console.WriteLine(s));
and when you want to cancel the subscription (cleaning up memory and resources) you simply dispose of it:
subscription.Dispose()
A: Here's an alternative approach that uses a single delay in favour of buffers and timers. It doesn't give you the events - it just signals when there is a violation - but it uses less memory as it doesn't hold on to too much.
public static class ObservableExtensions
{
public static IObservable<Unit> TimeLimitedThreshold<TSource>(
this IObservable<TSource> source,
long threshold,
TimeSpan timeLimit,
IScheduler s)
{
var events = source.Publish().RefCount();
var count = events.Select(_ => 1)
.Merge(events.Select(_ => -1)
.Delay(timeLimit, s));
return count.Scan((x,y) => x + y)
.Where(c => c == threshold)
.Select(_ => Unit.Default);
}
}
The Publish().RefCount() is used to avoid subscribing to the source more than once. The query projects all events to 1, and a delayed stream of events to -1, then produces a running total. If the running total reaches the threshold, we emit a signal (Unit.Default is the Rx type to represent an event without a payload). Here's a test (just runs in LINQPad with nuget rx-testing):
void Main()
{
var s = new TestScheduler();
var source = s.CreateColdObservable(
new Recorded<Notification<int>>(100, Notification.CreateOnNext(1)),
new Recorded<Notification<int>>(200, Notification.CreateOnNext(2)),
new Recorded<Notification<int>>(300, Notification.CreateOnNext(3)),
new Recorded<Notification<int>>(330, Notification.CreateOnNext(4)));
var results = s.CreateObserver<Unit>();
source.TimeLimitedThreshold(
2,
TimeSpan.FromTicks(30),
s).Subscribe(results);
s.Start();
ReactiveAssert.AssertEqual(
results.Messages,
new List<Recorded<Notification<Unit>>> {
new Recorded<Notification<Unit>>(
330, Notification.CreateOnNext(Unit.Default))
});
}
Edit
After Matthew Finlay's observation that the above would also fire as the threshold is passed "on the way down", I added this version that checks only for threshold crossing in the positive direction:
public static class ObservableExtensions
{
public static IObservable<Unit> TimeLimitedThreshold<TSource>(
this IObservable<TSource> source,
long threshold,
TimeSpan timeLimit,
IScheduler s)
{
var events = source.Publish().RefCount();
var count = events.Select(_ => 1)
.Merge(events.Select(_ => -1)
.Delay(timeLimit, s));
return count.Scan((x,y) => x + y)
.Scan(new { Current = 0, Last = 0},
(x,y) => new { Current = y, Last = x.Current })
.Where(c => c.Current == threshold && c.Last < threshold)
.Select(_ => Unit.Default);
}
} | unknown | |
d374 | train | $(document).ready(function(){
    if (location.hash) {
        $('a[href=' + location.hash + ']').tab('show');
    }
});
This is the solution I found here.
d375 | train | It looks like you init your manifest on the incorrect version of AOSP. See Downloading the Source for a good explanation of what you need to do to set up AOSP.
The main part from there that you want though, is:
repo init -u https://android.googlesource.com/platform/manifest -b android-4.0.1_r1
Which would init your repository on AOSP version 4.0.1_r1. If you want to init on the l-preview branch, it would be:
repo init -u https://android.googlesource.com/platform/manifest -b l-preview
Just keep in mind this isn't actually Android L's source code, it's a GPL update. I am not sure if they have moved it over to using Java version 1.7 yet, as I have not personally tried.
d376 | train | Try running library(caret) again; if the package is loaded, createDataPartition is there. If you still face the issue, check for caret updates.
d377 | train | You should fetch a row (at least):
if (mysqli_connect_errno()) {
echo "Failed to connect to MySQL: " . mysqli_connect_error();
}
$result = mysqli_query($conn,
"SELECT sum(SumofNoOfProjects) as sum_projects, sum(SumofTotalBudgetValue) as sum_value
FROM `meed`
WHERE Countries = '$countries'");
while ($row = mysqli_fetch_array($result, MYSQLI_ASSOC)) {
echo json_encode([ $row['sum_projects'], $row['sum_value'] ] );
exit;
}
for the multiple countries
Assuming your $_POST['countries'] contains "'Egypt','Algerie'"
then you could use a query as
"SELECT sum(SumofNoOfProjects) as sum_projects, sum(SumofTotalBudgetValue) as sum_value
FROM `meed`
WHERE Countries IN (" . $_POST['countries'] . ");" | unknown | |
d378 | train | Live Demo
std::string line;
// get input from cin stream
if (std::getline(std::cin, line)) // check for success
{
std::vector<std::string> words;
std::string word;
// The simplest way to split our line with a ' ' delimiter is using istreamstring + getline
std::istringstream stream;
stream.str(line);
// Split line into words and insert them into our vector "words"
while (std::getline(stream, word, ' '))
words.push_back(word);
if (words.size() % 2 != 0) // if word count is not even, print error.
std::cout << "Word count not even " << words.size() << " for string: " << line;
else
{
//Remove the last word from the vector to make it odd
words.pop_back();
std::cout << "Original: " << line << std::endl;
std::cout << "New:";
for (std::string& w : words)
    std::cout << " " << w;
}
}
A: You could write something like this:
int count = -1;
for (auto it = input.begin(); it != input.end();) {
    if (*it == ' ') {
        count++;
        it++;
        if (count % 2 == 0) {
            while (it != input.end()) {
                if (*it == ' ') break;
                it = input.erase(it);
            }
        } else {
            it++;
        }
    } else {
        it++;
    }
}
d379 | train | You can encode these as you would encode a binary number, by assigning increasing powers of two for each column. You want to multiply each row by c(1,2,4) and then take the sum.
# The multiplier, powers of two
x <- 2^(seq(ncol(df))-1)
x
## [1] 1 2 4
# The values
apply(df, 1, function(row) sum(row*x))
## row1 row2 row3
## 4 1 2
To add this as a new column:
df$new <- apply(df, 1, function(row) sum(row*x))
df
## X1 X2 X3 new
## row1 0 0 1 4
## row2 1 0 0 1
## row3 0 1 0 2
A: Try:
> df
X1 X2 X3
row1 0 0 1
row2 1 0 0
row3 0 1 0
>
>
> mm = melt(df)
No id variables; using all as measure variables
>
> mm$new = paste(mm$variable,mm$value,sep='_')
>
> mm
variable value new
1 X1 0 X1_0
2 X1 1 X1_1
3 X1 0 X1_0
4 X2 0 X2_0
5 X2 0 X2_0
6 X2 1 X2_1
7 X3 1 X3_1
8 X3 0 X3_0
9 X3 0 X3_0
mm$new is the column you want.
A: Maybe this is what you want:
> df$X1 = ifelse(df$X1==0,'green','yellow')
> df$X2 = ifelse(df$X2==0,'red','blue')
> df$X3 = ifelse(df$X3==0,'black','white')
>
> df
X1 X2 X3
row1 green red white
row2 yellow red black
row3 green blue black
>
> unlist(df)
X11 X12 X13 X21 X22 X23 X31 X32 X33
"green" "yellow" "green" "red" "red" "blue" "white" "black" "black" | unknown | |
d380 | train | You have quote issues as mentioned, but the main issue is you have inline event handlers in the generated html. That is not a good idea.
If you need to add actions to generated elements, use the
data-nameinlowercase="value"
on the elements, then assign the event handlers using
$("#container").on("event name","element selector",function() {
someFunction($(this).data("nameinlowercase"));
});
which will handle future elements too
In your case
'<div class="caret" data-togglename="ul#'+selectName+'-menu"></div>'
and
$("#caretParentContainerId").on("click",".caret",function() {
$($(this).data("togglename")).toggle();
});
where caretParentContainerId is the ID of the container that wraps the .caret elements
A: The problem is the double quotes nested inside other double quotes, like this
<li onclick="something="something more" something else"></li>
If you want to keep your code structure, try this:
menu += '<li onclick="$(\'#' + selectName + '-text\').text(\'' + optStr + '\');$(\'#' + selectName + '-menu\').hide();$(\'input[name=' + selectName + ']\').prop(\'value\', \'' + optVal + '\');">' + optStr + '</li>';
but I think it's better you do something like this:
menu += '<li onclick="myFunction(\'' + selectName + '\', \'' + optStr + '\', \'' + optVal + '\')">' + optStr + '</li>';
and create a function like this:
function myFunction(selectName, optStr, optVal) {
$("#" + selectName + "-text").text(optStr);
$("#" + selectName + "-menu").hide();
$("input[name='" + selectName + "']").prop("value", optVal);
}
This makes it easier to debug and simplifies your code.
d381 | train | You can't nest your execute() like that.
The best solution is to toss that list of members into an array() once, close your connection, and THEN iterate that array and update each record.
It should look like this:
$select_members_info_stmt->bind_param('ssss', $leader, $member_1, $member_2, $member_3);
$select_members_info_stmt->execute();
$select_members_info_stmt->bind_result($selected_username, $level, $experience, $playergold, $required_experience);
$members = array();
while($select_members_info_stmt->fetch())
{
// tossing into the array
$members[] = array(
'selected_username' =>$selected_username,
'level' => $level,
'experience' => $experience,
'playergold' => $playergold,
'required_experience' => $required_experience
);
}
$select_members_info_stmt->close();
// Now iterate through the array and update the user stats
foreach ($members as $m) {
if($update_user_stats_stmt = $mysqli->prepare("UPDATE members SET level = ?, experience = ?, playergold = ? WHERE username = ?"))
{
// Note that you need to use $m['selected_username'] here.
$update_user_stats_stmt->bind_param('iiis', $new_level, $new_experience, $new_gold, $m['selected_username']);
$update_user_stats_stmt->execute();
if($update_user_stats_stmt->affected_rows == 0)
{
echo '<div>Because of a system error it is impossible to perform a task, we apologize for this inconvience. Try again later.</div>';
}
$update_user_stats_stmt->close();
}
else
{
printf("Update user stats error: %s<br />", $mysqli->error);
}
}
A: You cannot nest actively running prepared statements on the same connection to mysql. Once you call execute() on any statement you cannot run another one on the same connection until that prepared statement is closed. Any fetches on the first prepared statement will fail once you start executing on the second one.
Only one 'live' statement can be prepared and running on the mysql server per connection
If you really need to nest your prepared statements, you could establish 2 separate mysqli connections. | unknown | |
d382 | train | I assume the delivery_confirmation method in reality returns a Mail object. The problem is that ActionMailer will call the deliver method of the mail object. You've set an expectation stubbing out the delivery_confirmation method but you haven't specified what should be the return value. Try this
mail_mock = double(deliver: true)
# or mail_mock = double(deliver_now: true)
expect(mail_mock).to receive(:deliver)
# or expect(mail_mock).to receive(:deliver_now)
allow(OrderMailer).to receive(:delivery_confirmation).with(order).and_return(mail_mock)
# the rest of your test code
A: If I got you right,
expect_any_instance_of(OrderMailer).to receive(:delivery_confirmation).with(order)
will test the mailer instance that will receive the call.
For more precision you may want to set up your test with the particular instance of OrderMailer (let's say order_mailer) and write your expectation the following way
expect(order_mailer).to receive(:delivery_confirmation).with(order) | unknown | |
d383 | train | When you are on some other screen in the app and press the back button, you go back to the previous screen. When you are on the home or login screen and press the back button twice within two seconds, the app is closed.
public lastTimeBackPress = 0;
public timePeriodToExit = 2000;
constructor(
public toastController: ToastController,
private platform: Platform,
private nav: NavController,
private router: Router,
) { }
handleBackButton() {
this.platform.backButton.subscribe(() => {
if (this.loaderOff) {
document.addEventListener(
'backbutton',
() => {},
false);
} else {
if (
this.router.url === '/tabs/home' ||
this.router.url === '/signin'
) {
if (new Date().getTime() - this.lastTimeBackPress < this.timePeriodToExit) {
navigator['app'].exitApp();
} else {
this.presentToast('Press again to exit');
this.lastTimeBackPress = new Date().getTime();
}
} else {
this.nav.back();
}
}
});
}
async presentToast(msg, color = 'dark') {
const toast = await this.toastController.create({
color,
message: msg,
duration: 3000,
showCloseButton: true,
closeButtonText: 'Close',
});
toast.present();
}
A: public void callAlert(){
AlertDialog.Builder builder1 = new AlertDialog.Builder(appCompatActivity);
builder1.setMessage("Do you want to close.");
builder1.setCancelable(true);
builder1.setPositiveButton(
"Yes",
new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int id) {
dialog.cancel();
finish();
}
});
builder1.setNegativeButton(
"No",
new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int id) {
dialog.cancel();
}
});
AlertDialog alert11 = builder1.create();
alert11.show();
}
@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
if (keyCode == KeyEvent.KEYCODE_BACK) {
callAlert();
return true;
}
return super.onKeyDown(keyCode, event);
} | unknown | |
d384 | train | You have the following solutions; maybe one of them will help you.
1) Add this in your .css:
html {
-webkit-text-size-adjust: none; /* Never autoresize text */
}
and this meta tag:
<meta name='viewport' content='width=device-width; initial-scale=1.0; maximum-scale=1.0;'>
2) You can also inject both into an existing website, using this JavaScript code:
var style = document.createElement("style");
document.head.appendChild(style);
style.innerHTML = "html{-webkit-text-size-adjust: none;}";
var viewPortTag=document.createElement('meta');
viewPortTag.id="viewport";
viewPortTag.name = "viewport";
viewPortTag.content = "width=320; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;";
document.getElementsByTagName('head')[0].appendChild(viewPortTag);
and use the UIWebViewDelegate (webViewDidFinishLoad) method:
- (void)webViewDidFinishLoad:(UIWebView *)webView{
NSString *javascript = @"var style = document.createElement(\"style\"); document.head.appendChild(style); style.innerHTML = \"html{-webkit-text-size-adjust: none;}\";var viewPortTag=document.createElement('meta');viewPortTag.id=\"viewport\";viewPortTag.name = \"viewport\";viewPortTag.content = \"width=320; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;\";document.getElementsByTagName('head')[0].appendChild(viewPortTag);";
[webView stringByEvaluatingJavaScriptFromString:javascript];
}
A: You can implement the web view delegate and change the font when the web view loads.
func webViewDidStartLoad(webView : UIWebView) {
//your code
} | unknown | |
d385 | train | You declared the generic type bound in the wrong place.
It should be declared within the declaration of the generic type parameter:
public final <T extends MyObject> T getObject(Class<T> myObjectClass)
{
//...
} | unknown | |
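For completeness, here is a runnable sketch with the bound in the right place (the class names and the reflective body are assumptions for illustration, and the method is made static so the example is self-contained):

```java
class MyObject {}
class SpecialObject extends MyObject {}

public class Main {
    // The bound is declared on the type parameter itself: <T extends MyObject>
    static <T extends MyObject> T getObject(Class<T> myObjectClass) throws Exception {
        return myObjectClass.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        SpecialObject obj = getObject(SpecialObject.class);
        System.out.println(obj instanceof MyObject); // true
    }
}
```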
d386 | train | After many attempts, I figured out the solution. In fact, there is no problem making a window 100 x 50 px or even smaller. The problem is that I had to close the simulator window before running again. In my case, such a small window had no title bar and no close button. So I had to publish with the title bar, close it, stop the run, and then I could run again with the new size and position I gave the window.
So, without closing the window in the simulator it is not possible to see the results of a new size. | unknown | |
d387 | train | I think inheritance is a good approach to this problem.
I can think of two downsides:
*It is possible to create additional columns on the inheritance children. If you control DDL, you can probably prevent that.
*You still have to create and modify indexes on all inheritance children individually.
If you are using PostgreSQL v11 or later, you could prevent both problems by using partitioning. The individual tables would then be partitions of the “template” table. This way, you can create indexes centrally by creating a partitioned index on the template table. The disadvantage (that may make this solution impossible) is that you need a partitioning key column in the table. | unknown | |
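A sketch of that partitioned setup (all table and column names are invented; requires PostgreSQL v11+ so the index can be declared centrally on the partitioned table):

```sql
-- The "template" table; each individual table becomes a partition of it.
CREATE TABLE readings (
    item_id    int         NOT NULL,   -- the partitioning key column
    created_at timestamptz NOT NULL,
    value      numeric
) PARTITION BY LIST (item_id);

-- Defined once here, automatically applied to every partition (v11+).
CREATE INDEX ON readings (created_at);

-- The individual tables; partitions cannot acquire extra columns.
CREATE TABLE readings_1 PARTITION OF readings FOR VALUES IN (1);
CREATE TABLE readings_2 PARTITION OF readings FOR VALUES IN (2);
```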
d388 | train | You first need to get a list of the user's friends by calling /me?fields=friends. Then, you only need to add their id to the picture URLs just like you did with the user:
<img src="https://graph.facebook.com/{{friend_id}}/picture"> | unknown | |
d389 | train | Your logic to get the new position is correct, but in your Update() function, you have to update the position of the camera using transform.position, assuming this script is a component you have added to the Camera in the scene.
// Update is called once per frame
void Update()
{
Vector3 newpos = Playerposition.position + cameraoffset;
transform.position = newpos;
}
If this script isn't on the camera, you'll need a reference to the camera: take it as an input in the Unity Inspector (declare public Camera cam; at the top of your class), then set it in the Inspector by dragging the camera object onto that field. Then you can do cam.transform.position = newpos; in Update().
d390 | train | You likely have overridden get_api_root_view without providing the api_url argument since it's already part of DRF: https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/routers.py#L292
A: I had the same error. I found I had an older version of drf-extensions. I have a feeling drf-extensions overrides the get_api_root_view method, and when it's not in sync with your version of Django Rest Framework, this can cause a problem (ie. drf-extensions is passing a parameter that DRF no longer expects, but in previous versions was acceptable).
If it's not drf-extensions specifically, it's probably something else that's overriding get_api_root_view as Linovia suggested. | unknown | |
d391 | train | You probably have register_globals turned on so $classes gets mixed with $_SESSION['classes'] at some point.
You should turn them off. (Here's why.)
Or, if turning them off is not possible due to whatever reason, change variable names.
A: Got it!
Here's my new code:
<?php
$classesBeingTaught = explode(",", $_SESSION['classes']);
foreach ($classesBeingTaught as $classBeingTaught) {
    echo "<option>".$classBeingTaught."</option>";
}
?>
d392 | train | Your code uses the old and deprecated not/1 predicate, which apparently is not supported in the Prolog system you're using, hence the existence error. Use instead the standard \+/1 predicate/prefix operator:
is_not_immune_to(Pkmn, AtkType) :-
is_type(Pkmn, Type), \+ immune(Type, AtkType).
With this change, you get for your sample call:
| ?- is_not_immune_to(charizard, ground).
true ? ;
no | unknown | |
d393 | train | Your code is sound. You just need to include this in the beginning of your ui:
ui <- fluidPage(
useShinyjs(), # add this
# rest of your ui code
) | unknown | |
d394 | train | use this
$fp = fsockopen ('ssl://www.paypal.com', 443, $errno, $errstr, 30);
instead of
$fp = fsockopen ('ssl://www.sandbox.paypal.com', 443, $errno, $errstr, 30);
I think it is better to use cURL instead of a socket.
d395 | train | If you want the Edit control to be different than the standard control, you should use the "EditItemTemplate". This will allow the edit row to have different controls, values, etc... when the row's mode changes.
Example:
<Columns>
<asp:TemplateField HeaderText="PC">
<ItemTemplate>
<asp:CheckBox ID="CheckBox1" runat="server" Checked='<%# Eval("82_PC").ToString() == "1" ? true:false %>' Enabled="false" />
</ItemTemplate>
<EditItemTemplate>
<asp:CheckBox ID="CheckBox1" runat="server" Checked="true" Enabled="false" />
</EditItemTemplate>
</asp:TemplateField>
</Columns>
A: I guess you could loop through all the rows of the GridView and enable the checkboxes something like below:
protected void grd_Bookcode_RowCommand(object sender, GridViewCommandEventArgs e)
{
if (e.CommandName == "Edit")
{
        for (int index = 0; index < grd_Bookcode.Rows.Count; index++)
        {
            CheckBox chk = grd_Bookcode.Rows[index].FindControl("CheckBox" + (index + 1)) as CheckBox;
chk.Enabled = true;
}
}
}
Hope this helps!! | unknown | |
d396 | train | To correct this problem I had to:
*Uninstall the app on the Android phone (important step).
*Unload the Android project from Solution Explorer.
*This brings up the project file code; now search the code for
<EmbedAssembliesIntoApk>false</EmbedAssembliesIntoApk>
*Change false to true and save.
*Reload the project; problem solved.
Note: leave fast deploy checked.
A: Go to Solution Properties >> Android Options >> uncheck "Use Fast Deployment (debug mode only)".
A: I had the same issue recently after adding images to my project. It turned out I had uppercase letters in a file name (SomePicture.png); renaming all images to lowercase solved it.
A: @AlwinBrabu, I think you meant "Project Properties" -> Android Options -> Uncheck Fast Deployment (debug mode only).
This worked for me, although this is a workaround. I do not consider it a solution.
A: To solve this, I had to right-click on the Android project in the Solution Explorer, then in Options -> Android Build uncheck the Fast Assembly Deployment option.
Then deploy the project on the Android emulator.
But after deploying it once, I went back to the settings and checked (i.e. ticked) the Fast Assembly Deployment option, and subsequent deploys worked fine.
I'm running Visual Studio for Mac 2022 version 17.0.1 (build 72). | unknown | |
d397 | train | It is possible, but your type must be global:
create type array_t is varray(2) of int;
Then use the array as a table (the open p for is only there so the block compiles):
declare
array_test array_t := array_t(10,11);
p sys_refcursor;
begin
open p for
select * from STATISTIK where abschluss1 in (select column_value from table(array_test ));
end; | unknown | |
d398 | train | You can try pd.to_numeric() and then fill the NaNs:
df['Feature2']=pd.to_numeric(df['Feature2'], errors="coerce").fillna(df['Feature2'])
OR
Or use where() with the condition ~df.Feature2.str.isnumeric(), filling the NaNs in the condition with fillna():
df['Feature2']=df['Feature2'].where(~df.Feature2.str.isnumeric().fillna(True),
pd.to_numeric(df.Feature2, errors="coerce").astype("Int64")
) | unknown | |
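To see what the first option does, here is a small self-contained check; the column name and values are hypothetical, just mirroring the pattern above:

```python
import pandas as pd

# Hypothetical mixed-type column: numeric strings become numbers,
# and the fillna() fallback restores the non-numeric originals
# where to_numeric produced NaN.
df = pd.DataFrame({"Feature2": ["1", "2.5", "abc", "7"]})
df["Feature2"] = pd.to_numeric(df["Feature2"], errors="coerce").fillna(df["Feature2"])
print(df["Feature2"].tolist())  # [1.0, 2.5, 'abc', 7.0]
```

The resulting column has object dtype, since it mixes floats and strings.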
d399 | train | You seem to be saying that you want your function to take a variable number of separate arrays as arguments, and then find the maximum number within any of those arrays.
If so, you can say [].concat(...arguments) to create a single new array with all of the values from the individual arrays that were arguments, then use the spread operator to pass that new array to Math.max(). (You don't need a loop.)
var firstArr = [1,2,3,4,5];
var secondArr = [6,7,8,9];
function myFun() {
var resl = Math.max(...[].concat(...arguments));
console.log("The maximum value is " + resl);
}
myFun(firstArr, secondArr);
A: It sounds like what you are trying to do is
function myFun(...arrays) {
const allValues = [].concat(...arrays);
return Math.max(...allValues);
}
console.log("The maximum value is " + myFun([1,2,3,4,5], [6,7,8,9]));
However, I would recommend avoiding spread syntax with potentially large data, and going with:
function myFun(...arrays) {
return Math.max(...arrays.map(arr => Math.max(...arr)));
}
or even better (note that Math.max cannot be passed to reduce directly, because reduce also passes the element index as a third argument):
function myFun(...arrays) {
  return arrays.map(arr => arr.reduce((a, b) => Math.max(a, b))).reduce((a, b) => Math.max(a, b));
}
A: You can use a rest parameter in the function declaration, build a single flattened array at the call site by spreading each array into an array literal, and then spread the collected argument into apply within the function:
function myFun(...arr) {
return Math.max.apply(Math, ...arr)
}
myFun([...firstArr, ...secondArr /*, ...nArr*/ ]); | unknown | |
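If the environment supports ES2019's Array.prototype.flat (an assumption, not something stated in the question), a compact alternative is to flatten the array-of-arrays first; maxOfArrays is just an illustrative name:

```javascript
// Flatten the array-of-arrays one level, then spread into Math.max.
// Assumes ES2019+ for Array.prototype.flat.
function maxOfArrays(...arrays) {
  return Math.max(...arrays.flat());
}

console.log(maxOfArrays([1, 2, 3, 4, 5], [6, 7, 8, 9])); // 9
```

Like the other spread-based answers, this is subject to argument-count limits for very large inputs.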
d400 | train | The web service does not enable the type-based optimizations by default. So to get the equivalent functionality:
java -jar compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS
--use_types_for_optimization=false
--js /code/built.js --js_output_file compiledCode.js
The web service also assumes any undefined symbol is an external library. For this reason it is not recommended for production use. | unknown |